I would be very grateful for any assistance on solving the below.
I have a table in Microsoft Power BI Desktop containing 2 types of columns: 1 column containing unique employee IDs (column labeled "Employee ID"), and several other columns marking the date when (if ever) an individual completed a specific task (columns labeled "Task 1," "Task 2," "Task 3," etc.). If an employee completes a given task, s/he receives a bonus.
Using Power BI Desktop, what is the best way to accomplish the following objectives:
(1) Assign a numerical value to the completion of each of these tasks/columns (i.e., the bonus), such that future business users (read: non-developers) can adjust the value/bonus as it changes year to year.
(2) Allow business users to manually enter (a) an employee ID and (b) a fiscal year (e.g., 2021), so that Power BI determines whether that employee completed any of the tasks within that fiscal year and returns the sum of the values assigned above for all those tasks/columns.
Thank you very much for your assistance! = )
For your second objective:
The easiest way is to prepare a report (visualizations - tables / charts) and add appropriate filters / slicers. The user will then be able to select the year and the employee.
For the first: it depends on what you really mean by assigning a numerical value.
You can prepare a table containing the bonus value for each task/year, and then write a measure that refers to it (e.g. with LOOKUPVALUE).
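To show the shape of that lookup idea, here is a sketch in plain SQL rather than DAX. All table and column names are hypothetical, and it assumes the wide Task 1..Task N columns have been unpivoted into one row per employee/task:

-- Hypothetical tables: Completions(EmployeeID, TaskName, CompletedDate)
-- and BonusValues(TaskName, FiscalYear, Bonus), the latter maintained
-- by business users as bonuses change year to year.
SELECT c.EmployeeID,
       b.FiscalYear,
       SUM(b.Bonus) AS TotalBonus
FROM   Completions c
JOIN   BonusValues b
  ON   b.TaskName   = c.TaskName
 AND   b.FiscalYear = EXTRACT(YEAR FROM c.CompletedDate)  -- assumes fiscal year = calendar year
WHERE  c.CompletedDate IS NOT NULL
GROUP BY c.EmployeeID, b.FiscalYear;

The same join-and-sum logic is what a LOOKUPVALUE-based measure would express in DAX, with the bonus table kept editable by business users.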
Crystal Reports won't let me use a custom count formula field to filter which transactions to show in a manager report.
I'm creating a Crystal report that team leaders are supposed to run to see on how many occasions their employees have reported in sick. A record is only supposed to show if that person has reported in sick 6 or more times in the last 12 months.
The report shows a record (a page) for each employee belonging to the manager's organisational unit. Below the employee information is a subreport where I show the transactions from the salary/time system. Using the Select Expert, I have filtered out the transactions that are supposed to show. I have then created a database field that calculates the date 12 months back from today, and filtered so that only the transactions falling into this period show.
My last problem is that I only want to show the records that have a minimum of 6 such transactions during the period. I created a formula field named #Antal ("amount" in Swedish) that simply counts the distinct number of "from" dates for the salary transactions I'm showing: DistinctCount({P_LSTAT.P_SXXX06}). (Since a change of law on 2019-01-01 we needed to create a new transaction type, some occasions after 2019 may have two transactions referring to one sick leave, which is why I count the first day of the period instead.)
Now the subreport has a new column, Antal, that counts the number of the desired salary transactions. I then try to use the selection formula to only show records where {#Antal} >= 6, but I get the following error:
This formula cannot be used because it must be evaluated later
Is there any other (better) way of doing this, or am I simply missing something?
For your selection based on {#Antal} >= 6 you need to use the group selection formula, not the record selection formula. Record selection is used to select records which meet the criteria before reading in the data. Group selection is used to filter out entire groups of records based on summarised values, after the records have been read in and the summaries calculated - which sounds like exactly what you need here.
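If it helps to see the distinction, it maps loosely onto SQL (a sketch with hypothetical table and column names): record selection plays the role of WHERE, while group selection plays the role of HAVING on an aggregated value.

-- Loose analogy only; interval syntax varies by database.
SELECT employee_id,
       COUNT(DISTINCT from_date) AS antal
FROM   salary_transactions
WHERE  from_date >= CURRENT_DATE - INTERVAL '12' MONTH   -- record selection
GROUP BY employee_id
HAVING COUNT(DISTINCT from_date) >= 6;                   -- group selection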
The value of a Formula Field is out of scope when the Select Expert is evaluated.
There is no process for calculating the value of a Formula Field before it is printed within the section of the report in which it is placed. The Select Expert is evaluated before any section of the report is printed, so at that point all Formula Fields are effectively Nothing.
I have a table of users and another table of transactions.
The transactions all have a date against them. What I am trying to ascertain for each user is the average time between transactions.
User | Transaction Date
-----+-----------------
A    | 2001-01-01
A    | 2001-01-10
A    | 2001-01-12
Consider the above transactions for user A. I am basically looking at the distance from one transaction to the next, chronologically, to determine the gaps.
There are 9 days between transactions one and two, and 2 days between transactions two and three. The average of these is 5.5, so I would want to identify the average time between user A's transactions as 5.5 days.
Any idea of how to achieve this in Tableau?
I am trying to create a calculated field for each transaction to identify the date of the "next" transaction but I am struggling.
{ FIXED [user id] : MIN(IF [Transaction Date] > **this transaction date** THEN [Transaction Date] END) }
I am not sure what to replace this transaction date with or whether this is the right approach at all.
Any advice would be greatly appreciated.
LODs don't have access to previous values directly, so you need to create a self-join in your data connection. Follow the steps below to achieve what you want.
Create a self-join on your data with the following criteria: join on matching [User], and pair each row with that user's later transactions ([Transaction Date] < [Transaction Date (Data1)])
Create an LOD calculation as below
{FIXED [User],[Transaction Date]:
MIN(DATEDIFF('day',[Transaction Date],[Transaction Date (Data1)]))
}
Build the View
PS: If you want to improve performance, custom SQL might be the way to go.
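As a hedged sketch of what that custom SQL could look like (table and column names are hypothetical): each row is paired with every later transaction for the same user, and the LOD above then picks out the minimum gap.

-- Self-join: one output row per (transaction, later transaction) pair.
SELECT a.user_id,
       a.transaction_date,
       b.transaction_date AS transaction_date_data1
FROM   transactions a
JOIN   transactions b
  ON   b.user_id          = a.user_id
 AND   b.transaction_date > a.transaction_date;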
The only type of calculation that can take order sequence into account (e.g., when the value for a calculated field depends on the value of the immediately preceding row) is a table calc. You can't use an LOD calc for this kind of problem.
You'll need to understand how partitioning and addressing works with table calcs, along with specifying your sort order criteria. See the online help. You can then do something like, for example, define days_since_last_transaction as:
if first() > 0 then min([Transaction Date]) -
lookup(min([Transaction Date]), -1) end
If you have very large data or for other reasons want to do your calculations at the database instead of in Tableau by a table calc, then you use SQL windowing (aka analytical) queries instead via Tableau's custom SQL.
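For example, a windowing version might look like this (a sketch with hypothetical names; DATEDIFF and LAG as written are SQL Server syntax):

SELECT user_id,
       AVG(1.0 * gap_days) AS avg_days_between   -- 1.0 * avoids integer averaging
FROM (
    SELECT user_id,
           DATEDIFF(day,
                    LAG(transaction_date) OVER (PARTITION BY user_id
                                                ORDER BY transaction_date),
                    transaction_date) AS gap_days
    FROM   transactions
) gaps
WHERE  gap_days IS NOT NULL   -- each user's first transaction has no predecessor
GROUP BY user_id;

For user A's sample rows this yields gaps of 9 and 2 days, averaging 5.5.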
Please attach an example workbook and anything you tried along with the error you have.
This might not be useful if you cannot set the User ID field as a filter.
So, you can set User ID as a filter. Then, following the steps mentioned here will lead you to calculating the difference between any two dates. Ideally, if you select any one value in the filter, the calculated field from the link should give you the difference between the dates you have in the transaction-date column.
I will try to explain the problem on an abstract level first:
I have X amount of data as input, which is always going to have a field DATE. Before, the dates that came as input (after some processing) were put in a table as output. Now, I am asked to output both the input dates and every date between the minimum date received and one year from that moment. If there was originally no input for some day between these two dates, all fields must come out as 0, or equivalent.
Example: I have two inputs, one with '18/03/2017' and the other with '18/03/2018'. I now need to create output data for all the missing dates between '18/03/2017' and '18/03/2018'. So, output '19/03/2017' with every field set to 0, and the same for the 20th and the 21st, and so on.
I know how to do this programmatically, but in PowerCenter I do not. I've been told to do the following (which I have done, but I would like to know of a better method):
Get the minimum date, day0. Then, with an Aggregator, create 365 fields, holding day0+1, day0+2, and so on, to create an artificial year.
After that we do several transformations, like sorting the dates and a union between them, to get the data ready for a Joiner. The idea of the Joiner is to do a Full Outer Join between the original data and the all-zero data we got from the previous Aggregator.
Then a Router picks with one of its groups the data that had actual dates (and fields without nulls), and with another group the data where all fields are null; those fields are then given a 0 to finally be written to a table.
I am wondering how this can be achieved while, for starters, removing the need to add 365 days to a date one field at a time. If I were to do this same process for 10 years instead of one, the task gets ridiculous really quickly.
I was wondering about an XOR type of operation, or some other function, that would cut the number of steps needed for what I (maybe wrongly) feel is a simple task. Currently I need 5 steps just to know which dates are missing between two dates: a minimum and one year from that point.
I have tried to be as clear as possible, but if I failed at any point please let me know!
I'm not sure what the Aggregator is supposed to do?
The same goes for the Full Outer Join: a normal join on a constant port is fine :)
Can you calculate the needed number of 'duplicates' before the Joiner? In that case a Lookup configured to return 'all rows', combined with a less-than-or-equal predicate, can help make the mapping much more readable.
In any case you will need a helper table (or file) with a sequence of numbers between 1 and the number of potential duplicates (or more).
I use our time dimension in the warehouse, which has one row per day from 1753-01-01 through the next 200,000 days, and a primary integer column with values from 1 and up ...
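A hedged sketch of how such a helper table gets used (all names hypothetical): fan the minimum input date out into a contiguous range by joining against the number sequence.

-- helper_numbers(n) holds integers 1, 2, 3, ...; date + integer
-- arithmetic as written is Oracle-style.
SELECT d.min_date + h.n - 1 AS calendar_date
FROM   (SELECT MIN(date_field) AS min_date FROM input_table) d
JOIN   helper_numbers h
  ON   h.n <= 366   -- one row per day for a year; raise the bound for longer spans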
You've identified that you know how to do this programmatically and, to be fair, this problem is more suited to that sort of solution... but that doesn't exclude PowerCenter by any means: just feed the two dates into a Java transformation and apply some code to produce all the dates between them, outputting a record for each. The Java transformation is ideal for record generation.
Ok... so you could override your Source Qualifier to achieve this in the selection query itself (I'm giving an Oracle-based example as it's what I'm used to, and I'm assuming your input data comes from a table). I looked up the CONNECT BY syntax here:
SQL to generate a list of numbers from 1 to 100
SELECT mindate.d + levquery.n - 1 AS Port1
FROM   (SELECT MIN(DATEFIELD) AS d FROM tablea) mindate,
       (SELECT LEVEL AS n FROM DUAL CONNECT BY LEVEL <= 365) levquery
(Check whether the query works for you - I haven't got access to a PC to test it at the minute.)
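And for the ten-year case raised in the question, the same pattern just needs a larger LEVEL bound; the end date can be derived rather than hard-coded (ADD_MONTHS is Oracle-specific, and this is untested in the same spirit as above):

SELECT mindate.d + levquery.n - 1 AS Port1
FROM   (SELECT MIN(DATEFIELD) AS d FROM tablea) mindate,
       (SELECT LEVEL AS n FROM DUAL CONNECT BY LEVEL <= 3660) levquery
WHERE  mindate.d + levquery.n - 1 <= ADD_MONTHS(mindate.d, 120)   -- 120 months = 10 years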
I have an incident queue, consisting of a record number (string), an open time (datetime), and a close time (datetime). The records go back a year or so. What I am trying to get is a line graph displaying the queue volume as it was at 8 PM each day. So if a ticket was opened before 8 PM on that day, or at any time on a previous day, but was not closed as of 8 PM, it should be contained in the population.
I tried the below, but this won't work because it doesn't really take into account multiple days.
If DATEPART('hour',[CloseTimeActual])>18 AND DATEPART('minute',[CloseTimeActual])>=0 AND DATEPART('hour',[OpenTimeActual])<=18 THEN 1
ELSE 0
END
Has anyone dealt with this problem before? I am using Tableau 8.2, cannot use 9 yet due to company license so please only propose 8.2 solutions. Thanks in advance.
For tracking history of state changes, the easiest approach is to reshape your data so each row represents a change in an incident state. So there would be a row representing the creation of each incident, and a row representing each other state change, say assignment, resolution, cancellation etc. You probably want columns to represent an incident number, date of the state change and type of state change.
Then you can write a calculated field that returns +1, -1 or 0 to express how the state change affects the number of currently open incidents. Then you use a running total to see the total number open at a given time.
You may need to show missing date values or add padding if state changes are rare. For other analytical questions, structuring your data with one record per incident may be more convenient. To avoid duplication, you might want to use database views or custom SQL with UNION ALL clauses to allow both views of the same underlying database tables.
It's always a good idea to be able to fill in the blank for "Each record in my dataset represents exactly one _________"
Tableau 9 has some reshaping capability in the data connection pane, or you can preprocess the data or create a view in the database to reshape it. Alternatively, you can specify a Union in Tableau with some calculated fields (or similarly custom SQL with a UNION ALL clause). Here is a brief illustration:
select open_date as Date,
"OPEN" as Action,
1 as Queue_Change,
<other columns if desired>
from incidents
UNION ALL
select close_date as Date,
"CLOSE" as Action,
-1 as Queue_Change,
<other columns if desired>
from incidents
where close_date is not null
Now you can use a running sum of SUM(Queue_Change) to see the number of open incidents over time. If you have other columns like priority, department, type, etc., you can filter and group as usual in Tableau. This data source can be in addition to your previous one. You don't have to have a single view of the data for every worksheet in your workbook. Sometimes you want a few different connections to the same data, at different levels of detail or from different perspectives.
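If you'd rather compute the running total in the database instead of with a Tableau table calculation, a window function over the same unioned rows is one hedged option (column names follow the sketch above; window-over-aggregate syntax varies slightly by database):

WITH changes AS (
    select open_date as Date, 1 as Queue_Change from incidents
    UNION ALL
    select close_date, -1 from incidents where close_date is not null
)
SELECT Date,
       SUM(SUM(Queue_Change)) OVER (ORDER BY Date) AS Open_Incidents   -- running total
FROM   changes
GROUP BY Date
ORDER BY Date;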
I'm trying to sum the data points by month instead of by individual day. The data originates from a SQL query, so I'm thinking that may be the only place to do it. However, I would much rather do this inside Report Builder 3.0. Any hints on how to do this?
For example, I want to see the number of tickets for the months of December and January as only two separate data points.
Can you create a new field (calculated, perhaps) on the dataset and group by that?
Otherwise, you should be able to create an expression on the graph's group that groups by the month of a certain field.
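If you do end up changing the SQL instead, grouping at the query level is straightforward. A sketch, where tickets and created_date are hypothetical names and YEAR/MONTH as written are SQL Server flavoured:

SELECT YEAR(created_date)  AS ticket_year,
       MONTH(created_date) AS ticket_month,
       COUNT(*)            AS ticket_count
FROM   tickets
GROUP BY YEAR(created_date), MONTH(created_date)   -- one data point per month
ORDER BY ticket_year, ticket_month;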