this should be so simple, and probably is...
I need something like this:
[finish time] - MIN([finish time])
or
DATEDIFF('second',MIN([finish time]), [finish time])
but, of course, I'd be mixing aggregate and non-aggregate values
so it doesn't work.
I've looked but I cannot find what should be a stupid-simple solution.
thanks,
joe
Well, I believe you are trying to compare the finish time of an entry with the minimum finish time in the table, is that correct?
The right way to calculate that is with a table calculation:
MAX([finish time]) - WINDOW_MIN(MAX([finish time]))
Don't worry about the MAX()s; they are only there to aggregate the values.
Now, importantly, your worksheet should have the level of detail you need, i.e., if each entry has a unique Id, you should drag [Id] to the worksheet, together with this new calculated field, so it can work. Table calculations work with partitions made from what is on the worksheet.
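For example, a minimal sketch of the asker's original DATEDIFF idea rewritten as a table calculation (assuming each entry has a unique [Id] in the view):
// seconds between the earliest finish in the partition and this entry's finish
DATEDIFF('second', WINDOW_MIN(MIN([finish time])), MIN([finish time]))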
I am trying to get the previous SUM(someField) based on a variable value, which is an Id.
This is not a table; I'm doing a KPI.
On Qlik you would do something like:
SUM({<Id={"$(=Max(vVariable)-1)"}>} someField)
But I cannot achieve it in Tableau. Of course this is due to my lack of knowledge; unfortunately time is ticking at work, and I wanted to see if anyone has any input!
Thanks
Assuming you can use a sample input like Superstore (using Sales as the metric), this could be what you're looking for:
In red you can see your "variable" which allows you to select a value and in blue you'll find the unique row for the previous value (Order ID sorted).
The first thing you need to do is create a parameter based on all the Order ID values:
Then things start to get a bit complicated if you're not familiar with LOD (Level of Detail) expressions and the order of execution in Tableau, especially for filters.
Assuming that you can get some information on your own (otherwise, feel free to ask), the next step is to "pre-calculate" the equivalent of a table with a row for each Order ID that also carries the previous Order ID value.
You can achieve this by combining a FIXED (LOD) expression and the LOOKUP function, creating this calculated field, "Lookup Order ID":
LOOKUP(MAX({ FIXED [Order ID] : MAX([Order ID]) }), 1)
This is actually just a calculated field that you want to "fix" because you need the filter to act after that calculation has been made; then you shift your data by 1 row backward.
Once you've done that, you just need to create another calculated field to test your parametric value, and it could be something like this "check param":
[Lookup Order ID] = [Order ID param]
Moving this calculated field to the Filters section and selecting just "true" values, you'll get the unique row shown in the initial image: the previous value (blue) relative to the one you select in the parameter drop-down menu (red).
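With [check param] on the Filters shelf set to True, the displayed measure can then be the plain aggregate - a sketch, assuming Superstore's [Sales] stands in for someField:
// the only visible row is the previous Order ID, so this is its total
SUM([Sales])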
I have a spreadsheet of support case management data. I am working with this in Tableau. Each line in the spreadsheet is an individual case. Each case has, among much else, a support agent name and a Yes or No of whether the case work was started within 12 hours. I'd like to know, for each agent, what percentage of the time they started the case work within 12 hours. So, if Bob has 2 "No" and 8 "Yes", he should have 8 / (2 + 8) = 80%.
My attempt at this was to create 2 sets. One is the set of "Yes, started within 12 hours" (those that have "yes" in that field), and one is the set of "No, not started within 12 hours", the complement of the other set. Silly me, I thought I could do something like COUNT(yeses) / COUNT(nos). Nope, big red failure. So what is the right way to do this?
It would help immensely to please respond as if this is the first thing I have ever done in Tableau. It is. I have learned a lot in this project, but only in comparison to the nothing I knew previously. Please also let me know if I've left out something necessary to answer this. I've tried to be complete but, well, am noob...
If it clarifies anything, here's a poor Excel mockup of data and the effect I'm looking for:
Yes, this is possible and easy in Tableau, but first a couple of points.
The reason your attempt to use COUNT() did not work is that COUNT() does not operate the way you, and 99% of the people on the planet, expect. COUNT([some expression]) returns the number of records that have a non-null value, any value, for [some expression]. The name comes from SQL relational databases.
The calculations would be just a bit simpler if your third column took the boolean values True or False instead of the string values "Yes" or "No". (In which case you could drop = "Yes" from the formula below.)
So two ways to do your calculation are:
Directly with an aggregate calculation which can get the right result, but is hard coded for this case, such as:
SUM(INT([Started within 24 hrs?] = "Yes")) / SUM([Number of Records])
Using a table calc - which in this case is a bit easier and more flexible.
First, build a table or viz in Tableau showing SUM([Number of Records]) with the dimensions you care about in play, say with [Name] on Rows, [Started within 24 hrs?] on Columns, and SUM([Number of Records]) on Text. Second, right-click your measure SUM([Number of Records]) and choose Percentage of Total from the Quick Table Calc menu. Finally, use that same menu to adjust Compute Using to specify how you want the percentages computed - in this case, using [Started within 24 hrs?].
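For reference, that quick table calc writes the equivalent of something like the following for you:
// percentage of total, computed along the Compute Using fields
SUM([Number of Records]) / TOTAL(SUM([Number of Records]))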
If you only want to show some of the data, right click on the column header for the values you wish to hide and choose Hide.
The type conversion function INT() converts True to 1 and False to 0.
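So if the column really were boolean, the aggregate version above could shrink to something like:
// assuming [Started within 24 hrs?] now holds True/False instead of "Yes"/"No"
SUM(INT([Started within 24 hrs?])) / SUM([Number of Records])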
You could create another column that converts the Yes's into 1's, and the No's into 0's. Sum up all the 1's and divide it by the total and that's your percentage.
edit: the new column would look something like
=IF(C3="Yes",1,0)
in other words, if Cn (column C in row n) is "Yes", then 1, else 0
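Then, assuming those 1s and 0s land in column D, the overall percentage could be:
=SUM(D:D)/COUNT(D:D)
A per-agent version could use something like =AVERAGEIF(A:A,"Bob",D:D), since the average of 1s and 0s is exactly the percentage of Yes cases.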
I'm trying to create a Measure in Power BI using DAX that achieves the below.
The data set has four columns, Name, Month, Country and Value. I have duplicates so first I need to dedupe across all four columns, then group by Month and sum up the value. And then, I need to average across the Month to arrive at a single value. How would I achieve this in DAX?
I figured it out. The reply by @OscarLar was very close, but nested SUMMARIZE causes problems because it cannot aggregate values calculated dynamically within the query itself (https://www.sqlbi.com/articles/nested-grouping-using-groupby-vs-summarize/).
I kept the inner SUMMARIZE from @OscarLar's answer and replaced the outer SUMMARIZE with a GROUPBY. Here's the code that worked.
AVERAGEX(
    GROUPBY(
        SUMMARIZE(Data, Data[Name], Data[Month], Data[Country], Data[Value]),
        Data[Month],
        "Month_Value", SUMX(CURRENTGROUP(), Data[Value])
    ),
    [Month_Value]
)
Not sure I completely understood the question since you didn't provide example data or some DAX code you've already tried. Please do so next time.
I'm assuming parts of this cannot (for reasons) be done using Power Query, so you have to use DAX. Then I think this will do what you described.
Create a temporary data table called Data_reduced in which duplicate rows have been removed.
Data_reduced =
SUMMARIZE(
    'Data';
    'Data'[Name];
    'Data'[Month];
    'Data'[Country];
    'Data'[Value]
)
Then create the averaging measure like this
AveragePerMonth =
AVERAGEX(
    SUMMARIZE(
        'Data_reduced';
        'Data_reduced'[Month];
        "Sum_month"; SUM('Data_reduced'[Value])
    );
    [Sum_month]
)
Where Data is the name of the table.
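If you'd rather skip the intermediate table, a single-measure sketch of the same idea (iterating the distinct months and summing the deduplicated rows per month) might look like this, under the same assumptions about the Data table:
AveragePerMonth =
AVERAGEX(
    VALUES( 'Data'[Month] );
    CALCULATE(
        SUMX(
            SUMMARIZE( 'Data'; 'Data'[Name]; 'Data'[Month]; 'Data'[Country]; 'Data'[Value] );
            'Data'[Value]
        )
    )
)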
I will try to explain the problem on an abstract level first:
I have X amount of data as input, which is always going to have a field DATE. Before, the dates that came as input (after some process) were put in a table as output. Now, I am asked to output both the input dates and every date between the minimum date received and one year from that moment. If there was originally no input for some day between these two dates, all fields must come out as 0, or equivalent.
Example. I have two inputs, one with '18/03/2017' and the other with '18/03/2018'. I now need to create output data for all the missing dates between '18/03/2017' and '18/03/2018'. So, output '19/03/2017' with every field set to 0, and the same for the 20th and the 21st, and so on.
I know how to do this programmatically, but in PowerCenter I do not. I've been told to do the following (which I have done, but I would like to know of a better method):
Get the minimum date, day0. Then, with an Aggregator, create 365 fields holding day0+1, day0+2, and so on, to create an artificial year.
After that we do several transformations, like sorting the dates and a union between them, to get the data ready for a Joiner. The idea of the Joiner is to do a Full Outer Join between the original data and the all-zero-field data we got from the previous Aggregator.
Then a Router picks, with one of its groups, the data that had actual dates (and fields without nulls), and with another group the rows where all fields are null; those fields are then given a 0 to finally be written to a table.
I am wondering how this can be achieved while, for starters, removing the need to add 365 days to a date. If I were to do this same process for 10 years instead of one, the task gets ridiculous really quickly.
I was wondering about an XOR type of operation, or some other function, that would cut the number of steps needed for what I (maybe wrongly) feel is a simple task. Currently I need 5 steps just to know which dates are missing between two dates, a minimum and one year from that point.
I have tried to be as clear as possible, but if I failed at any point please let me know!
I'm not sure what the aggregator is supposed to do.
The same with the 'full outer' join? A normal join on a constant port is fine :)
Can you calculate the needed number of 'duplicates' before the Joiner? In that case a Lookup configured to return 'all rows' and a less-than-or-equal predicate can help make the mapping much more readable.
In any case you will need a helper table (or file) with a sequence of numbers between 1 and the number of potential duplicates (or more).
I use our time dimension in the warehouse, which has one row per day from 1753-01-01 through the next 200,000 days, and a primary integer column with values from 1 and up...
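With a helper table like that, generating the dates becomes a simple cross join bounded on the integer column - a sketch, assuming the helper is called TIME_DIM with an integer column N:
-- one generated date per helper row, anchored at the minimum input date
SELECT MIN(t.DATEFIELD) + d.N - 1 AS GEN_DATE
FROM   tablea t, TIME_DIM d
WHERE  d.N <= 365
GROUP  BY d.N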
You've identified that you know how to do this programmatically, and to be fair this problem is more suited to that sort of solution... but that doesn't exclude PowerCenter by any means: just feed the 2 dates into a Java transformation and apply some code to produce all dates between them, outputting a record for each. The Java transformation is ideal for record generation.
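A minimal sketch of what that Java transformation logic could look like, assuming (hypothetically) that the two dates arrive parsed as java.time.LocalDate values d0 and d1 and that OUT_DATE is the output port; generateRow() is the Java transformation call that emits one output row:
java.time.LocalDate d = d0;
while (!d.isAfter(d1)) {
    OUT_DATE = d.toString();  // hypothetical output port, one date per row
    generateRow();            // emit a record for this calendar day
    d = d.plusDays(1);        // advance to the next day
}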
OK... so you could override your Source Qualifier to achieve this in the selection query itself (am giving an Oracle-based example as it's what I'm used to, and I'm assuming your input data comes from a table). I looked up the CONNECT BY syntax here:
SQL to generate a list of numbers from 1 to 100
SELECT MIN(tablea.DATEFIELD) + levquery.n - 1 AS Port1
FROM   tablea,
       (SELECT LEVEL n FROM DUAL CONNECT BY LEVEL <= 365) levquery
GROUP  BY levquery.n
(Check if the query works for you - I haven't got access to a PC to test it at the minute.)
Tough problem I am working on here.
I have a table of CustomerIDs and CallDates. I want to measure whether there is a 'repeat call' within a certain period of time (up to 30 days).
I plan on creating a parameter called RepeatTime which is a range from 0 - 30 days, so the user can slide a scale to see the number/percentage of total repeats.
In Excel, I have this working. I sort CustomerID in order and then sort CallDate from earliest to latest. I then have formulas like:
=IF(AND(CurrentCustomerID = FutureCustomerID, FutureCallDate - CurrentCallDate <= RepeatTime), 1,0)
CurrentCustomerID = the current row, and FutureCustomerID = the following row (so it is checking whether the customer ID is the same).
FutureCallDate = the following row and CurrentCallDate = the current row. It subtracts the current call time from the future call time to measure the time in between.
The goal is to be able to see, dynamically, how many customers called in for a specific reason within maybe 4 hours, or 1 day, or 5 days, etc., all the way up to 30 days (this is our actual metric, but it is good to see the calls which are repeats within a shorter time frame so we can investigate).
I had a similar problem; see here for a detailed version: Array calculation in Tableau, maxif routine
In your case, that is basically the same thing as mine, so you could apply that solution, but I find the one I'm about to give easier to understand. I would do:
1) Create a calculated field called RepeatTime:
DATEDIFF('day', LOOKUP(MAX([CallDates]), -1), MAX([CallDates]))
This will calculate how many days have passed between the previous call and the current one. You can add an IFNULL so you don't get Null values for the first entry.
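For instance, a sketch using the same field names:
// default the first call of each customer to 0 days instead of Null
IFNULL(DATEDIFF('day', LOOKUP(MAX([CallDates]), -1), MAX([CallDates])), 0)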
2) Drag CustomersID, CallDates and RepeatTime to the worksheet (they can be on the Marks card; they don't need to be on Rows or Columns).
3) Configure the table calculation of RepeatTime: Compute using Advanced..., partitioning by CustomersID, addressing CallDates.
Also sort by the field CallDates, Maximum, Ascending.
This will guarantee the table calculation works properly
4) Now you have a base that you can use for what you need. You can either export it to csv or mdb and connect to it.
The best approach, actually, is to have this RepeatTime field calculated outside Tableau, on your database, so it's already there when you connect to it. But this is a way to use Tableau to do the calculation for you.
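A sketch of that database-side pre-calculation, assuming a calls table and a database that supports window functions (exact date arithmetic varies by vendor):
-- days since the customer's previous call (NULL for the first call)
SELECT CustomerID,
       CallDate,
       CallDate - LAG(CallDate) OVER (PARTITION BY CustomerID
                                      ORDER BY CallDate) AS RepeatTime
FROM   calls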
Unfortunately there's no way to do this directly with your database.