I have a report that I've written and I understand how to create running totals and such, but need help creating a custom evaluation formula.
I have two levels of groups: the first group is based on a certain user, and the next group is based on the transactions that user has been involved in. I have details hidden, and am only interested in the totals for a particular activity. This is working great and the totals are correct, but the problem is that each activity has a 'line number', which can be the same as another activity's (i.e., two activities can each contain lines 1, 2, 3), so a distinct total across the whole data set isn't accurate. I only want it to be distinct within each individual recordset, not globally.
The example is below... if I do a count on each record for this dataset, it comes out to 18 because there are duplicate line numbers on each... but if I do a distinct count, it only comes to 9 because of duplicate line numbers across multiple activities.
I guess what I need to know is how I can take the totals per detail group and have them total up properly in my second footer. I assume it's going to take compiling a string that combines the activity number and the line number, and then comparing those?
Here is an example of the data contained within the total groupings:
I figured this out on my own... it turned out to be pretty simple. I converted my numeric values to text, combined the transaction id and the line id into a single test value, and did a distinct count on that... Sometimes it just helps to stop staring the problem down.
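For anyone hitting the same thing, here is a rough sketch of that kind of formula - the field names are placeholders rather than the actual ones from my report:

// Formula field, e.g. {@ActivityLine}: build one text key per activity/line pair
ToText({Transaction.ActivityID}, 0, "") + "-" + ToText({Transaction.LineNumber}, 0, "")

// Summarise with a distinct count of that formula, conditioned on the user group
DistinctCount({@ActivityLine}, {Transaction.UserID})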
Users provide me data through a form. The most important part of that data is the date(s) they select (sign up for, in my case). I've created new tabs in the spreadsheet that are associated with those dates, and I'm copying and pasting the exact information they provide into the correct date tab(s). Sometimes they select multiple dates. At this rate, though, I'm moving very slowly and making many mistakes.
Is there code that can duplicate what I'm doing, to avoid the simple mistakes I'm making and save time?
Updated: here is the link to a mock sheet (I hope I did it right): Mock Sheet
(quick pic of what I'm working with)
https://imgur.com/a/NErREzU
Thank you for sharing your spreadsheet. Here is what I recommend:
1.) In your existing individual date sheets, select and delete whichever columns you will not be using; likewise, select and delete all rows beyond the maximum number of rows you might ever need per sheet. Keeping sheets trim is the professional standard, makes it easier to focus on the important information, and assures maximum processing speed.
2.) Delete everything from your existing individual date sheets, including the headers.
3.) Place the following formula into cell A1 of each sheet:
=FILTER({Sheet1!A:D,Sheet1!F:F},(ROW(Sheet1!A:A)=1)+(REGEXMATCH(Sheet1!E:E,"December 17, 2021")))
Adjust the date in quotation marks to match the sheet. It seems that all of your events may be happening from 5:00 - 9:00, so if that assumption is correct you don't need to include the time. However, if there are different times on the same day (and therefore two different sheets for that day), you will need to include the start time inside the formula's quotes as well, exactly as it comes in from the form (e.g., "December 17, 2021 Friday, 5:00pm"); there is an example of this after the explanation below. The hyphen and end time seem superfluous and can be left off.
4.) To create a new individual date sheet, simply duplicate an existing individual date sheet, rename the copy, color your header row as desired, and change the date text between the quotes in the A1 formula (which will already be in place from the duplicated sheet).
5.) Since you will be adding further notes or other information in additional columns within each individual date sheet, be sure that you never delete information from Sheet1; otherwise, the formula-produced row information will collapse to eliminate deleted rows, and your manually entered data will not match up. For this reason, I recommend simply hiding Sheet1 entirely, as this will cut down on the chance that someone accidentally tampers with the data coming into that sheet. There is no reason to have Sheet1 showing, since all of its data will be included in some other sheet via the formulas there.
The formula FILTERs in only the five target columns, which are included in the opening virtual array formed between the curly brackets { }; and it will include rows that match either one of two conditions (shown between sets of parentheses, where the plus symbol means 'OR'): either A.) the row is Row 1 (i.e., the header row information) or B.) the part in quotes can be found in Sheet1!E:E.
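For instance, if one day needs both a 5:00pm sheet and a 7:00pm sheet, the only change per sheet would be the quoted text (the time shown here is just an assumed example of what the form might send):

=FILTER({Sheet1!A:D,Sheet1!F:F},(ROW(Sheet1!A:A)=1)+(REGEXMATCH(Sheet1!E:E,"December 17, 2021 Friday, 5:00pm")))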
Crystal Reports won't let me use a custom count formula field to filter which transactions to show in a manager report.
I'm creating a Crystal report that team leaders are supposed to run to see on how many occasions their employees have reported in sick. A record is only supposed to show if that person has reported in sick 6 or more times in the last 12 months.
The report shows a record (a page) for each employee belonging to the manager's organisational unit. Below the employee information is a subreport where I show the transactions from the salary/time system. Using the Select Expert, I have filtered the transactions that are supposed to show. I have then created a database field that calculates which day was 12 months back from today, and filtered so that only the transactions falling into this period show.
My last problem is that I only want to show records that have a minimum of 6 such transactions during the period. I created a formula field named #Antal ("amount" in Swedish) that simply counts the distinct number of "from"-dates of the salary transactions I'm showing (since a change of law on 2019-01-01 we needed to create a new transaction type, some occasions after 2019 may have two transactions referring to one sick leave, which is why I count the first day of the period instead): DistinctCount({P_LSTAT.P_SXXX06})
Now the subreport has a new column, Antal (amount), that counts the number of the desired salary transactions. I then try to use the selection formula to only show records where {#Antal} >= 6, but I get the following error:
This formula cannot be used because it must be evaluated later
Is there any other (better) way of doing this, or am I simply missing something?
For your selection based on {#Antal} >= 6 you need to use the group selection formula, not the record selection formula. Record selection is used to select records which meet the criteria before reading in the data. Group selection is used to filter out entire groups of records based on summarised values, after the records have been read in and the summaries calculated - which sounds like exactly what you need here.
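In the designer that's Report > Selection Formulas > Group (rather than Record). As a rough sketch - assuming the data is grouped on an employee field, whose name here is only a placeholder - the group selection formula could also use the summary directly:

// Keep only groups with 6 or more distinct "from"-dates in the period
DistinctCount({P_LSTAT.P_SXXX06}, {P_LSTAT.EMPLOYEE_ID}) >= 6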
The value of a Formula Field is out of scope when the Select Expert is evaluated.
There is no process for calculating the value of a Formula Field before it is printed within the section of the report where it is placed. The Select Expert is evaluated before any section of the report is printed, so at that time all Formula Fields are effectively Nothing.
Truth be told I'm not entirely sure what it is I'm trying to do here, well, that is I know what I want to achieve, but not how to go about it... so here's hoping you can help point me in the right direction!
I need to create a Crosstab report which has customers down the side, dates for columns, sales for figures. Simple enough.
Where it gets tricky is that they then want another row beneath the customers which singles out two customers, and their sales for one particular product.
They then want another row which subtracts that figure from the overall sales total of the first section (see example image).
I'm not really sure where to even start with this. I think I may need to use a query union, but every time I start I get kinda stuck... help!
That's not a crosstab; the summary calculations don't work. It's probably going to end up being three crosstabs formatted to look like a single crosstab. The first crosstab is everything down to the Total line; the last two lines are each their own crosstab. Set the Size & Overflow, Padding, Margin, and other formatting properties so everything is laid out, bolded, and shaded the way you want.
You can create this as a crosstab if you are okay with setting each customer as a fact. You can add them to the crosstab in any order you want. Then you can create a new data item and use a conditional statement that sets the metric to be the specific customers the consumer wants.
For example,
IF ([Customer] IN (?PrmCust?)) Then ([Sales]) Else (0)
For the revised total, unlock the report and replace those cells with layout calculations that take the overall total and reduce it by the amounts for the selected customers.
I will try to explain the problem on an abstract level first:
I have X amount of data as input, which will always have a field DATE. Previously, the dates that came in as input (after some processing) were put in a table as output. Now I am asked to output both the input dates and every date between the minimum date received and one year from that point. If there was originally no input for some day between these two dates, all fields must come out as 0, or equivalent.
Example: I have two inputs, one with '18/03/2017' and the other with '18/03/2018'. I now need to create output rows for all the missing dates between '18/03/2017' and '18/03/2018'. So, output '19/03/2017' with every field set to 0, and the same for the 20th, the 21st, and so on.
I know how to do this programmatically, but not in PowerCenter. I've been told to do the following (which I have done, but I would like to know if there is a better method):
Get the minimum date, day0. Then, with an Aggregator, create 365 fields, each holding day0+1, day0+2, and so on, to build an artificial year.
After that we do several transformations, like sorting the dates and a union between them, to get the data ready for a Joiner. The idea of the Joiner is to do a Full Outer Join between the original data and the artificial-date data we got from the previous Aggregator, whose other fields will end up as 0.
Then a Router sends the rows that had actual dates (and fields without nulls) to one group, and the rows where all fields are null to another group; those null fields are then given a 0 and finally written to a table.
I am wondering how this can be achieved while, for starters, removing the need to add 365 days to a date. If I were to do this same process for 10 years instead of one, the task gets ridiculous really quickly.
I was wondering about an XOR type of operation, or some other function that would cut down the number of steps needed for what I (maybe wrongly) feel is a simple task. Currently I need 5 steps just to know which dates are missing between two dates, a minimum and one year from that point.
I have tried to be as clear as possible, but if I failed at any point please let me know!
I'm not sure what the Aggregator is supposed to do?
The same goes for the 'full outer' join - a normal join on a constant port is fine :)
Can you calculate the needed number of 'duplicates' before the Joiner? In that case a Lookup configured to return 'all rows', together with a less-than-or-equal predicate, can help make the mapping much more readable.
In any case you will need a helper table (or file) with a sequence of numbers from 1 to the number of potential duplicates (or more).
I use our time dimension in the warehouse, which has one row per day from 1753-01-01 through the next 200,000 days, and a primary integer column with values from 1 and up...
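If you don't have such a table already, a minimal sketch of one way to build it (Oracle syntax here; the table name and row count are just examples):

-- One-off helper table holding a simple 1..N sequence
CREATE TABLE helper_numbers AS
SELECT LEVEL AS n FROM DUAL CONNECT BY LEVEL <= 4000;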
You've identified that you know how to do this programmatically, and to be fair this problem is more suited to that sort of solution... but that doesn't exclude PowerCenter by any means. Just feed the 2 dates into a Java transformation and apply some code to produce all the dates between them, outputting a record for each. The Java transformation is ideal for record generation.
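The exact transformation setup is up to you, but the core of that code is just a loop from the first date to the last, emitting one row per date (in an active Java transformation each iteration would set the output port and call generateRow()). A standalone sketch of the loop, using made-up example dates:

import java.time.LocalDate;

public class DateRangeGenerator {
    public static void main(String[] args) {
        // Example window: the minimum input date and one year from that point
        LocalDate start = LocalDate.of(2017, 3, 18);
        LocalDate end = start.plusYears(1);

        // Emit every date from start up to and including end
        for (LocalDate d = start; !d.isAfter(end); d = d.plusDays(1)) {
            System.out.println(d);  // in the transformation: set the output port, then generateRow()
        }
    }
}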
Ok... so you could override your Source Qualifier to achieve this in the selection query itself (I'm giving an Oracle-based example as it's what I'm used to, and I'm assuming your input data comes from a table). I looked up the CONNECT BY syntax here:
SQL to generate a list of numbers from 1 to 100
SELECT (MIN(tablea.DATEFIELD) + levquery.n - 1) AS Port1
FROM tablea,
     (SELECT LEVEL n FROM DUAL CONNECT BY LEVEL <= 365) levquery
GROUP BY levquery.n
(Check whether the query works for you - I don't have access to a PC to test it at the minute.)
I've been working on a report that uses subreports to print records.
The problem is: for the same information, there may be several records - i.e.:
There may be several records for the same product if those records differ in a single column. My goal here is to make a Record Selection Formula that says: "if that item is already shown, then don't show it again."
I've tried to use (shared) variables for this, but can't seem to find the way, because of the evaluation time.
Selection formulas are already being used to apply some filtering criteria. The column that may differ between two occurrences of the same record is not always the same, so using a simple Selection Formula is not likely to work...
Any suggestions?
Example:
I used the record selection to tell the report:
"Show me all the products according to these criteria (warehouse=parameter1 and category=parameter2 for example)".
But there may be more than one record for the same product of the same category and inside the same warehouse, if one or more fields are different (for example, different price, different lot)
I don't want to display those repetitions.
Your approach is wrong... a Record Selection Formula is something that is applied at the report level, not at the row or column level.
If your requirement is not to show the records that are duplicates, then you need to write a suppress condition for them; apply the suppress condition to the rows or columns as your requirement dictates.
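As a rough sketch of such a suppress condition (the field name is a placeholder, and it assumes the details are sorted so that duplicates of the same product sit next to each other), entered in Section Expert > Details > Suppress (x-2):

// Hide a detail row when it repeats the previous row's product
Not OnFirstRecord And {Products.ProductID} = Previous({Products.ProductID})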