Filter based on a column which is not in SUMMARIZE (DAX) - olap-cube

I would like to generate a result based on the following DAX query, but it returns the error below.
evaluate
(
filter
(
summarize
(
'Date',
'Date'[Numeric Month]
),
AND ('Date'[Numeric Month] >=(YEAR(TODAY())-1)* 100 + 1,'Date'[NumericDate] <=TODAY())
)
)
Error:
Query (11, 60) A single value for column 'Numeric Date' in table 'Date' cannot be determined. This can happen when a measure formula refers to a column that contains many values without specifying an aggregation such as min, max, count, or sum to get a single result.
I have tried various combinations with ADDCOLUMNS and SUMMARIZE, but nothing works. I just want 'Date'[Numeric Month] in the output.

I am not sure what you are trying to achieve here. Do you just want distinct Numeric Months as the output? If so, do this:
EVALUATE
(
    VALUES ( 'Date'[Numeric Month] )
)
Otherwise you should move your 'Date'[NumericDate] <= TODAY() condition into an iterator, i.e. pass a FILTER as the first argument to SUMMARIZE:
EVALUATE
(
    FILTER
    (
        SUMMARIZE
        (
            FILTER (
                'Date',
                'Date'[NumericDate] <= TODAY ()
            ),
            'Date'[Numeric Month]
        ),
        'Date'[Numeric Month] >= ( YEAR ( TODAY () ) - 1 ) * 100 + 1
    )
)


Grouping + aggregation of itab with table comprehensions

A rather typical task, but I'm stuck on doing it in an elegant way.
For example, I need to find the last shipment for each vendor, i.e. the delivery with the maximum date per vendor:
VENDOR DELIVERY DATE
10 00055 01/01/2019
20 00070 01/19/2019
20 00088 01/20/2019
20 00120 11/22/2019
40 00150 04/01/2019
40 00200 04/10/2019
The result table should be populated as follows:
VENDOR DELIVERY DATE
10 00055 01/01/2019
20 00120 11/22/2019
40 00200 04/10/2019
I implemented this in the following way, via DESCENDING, which I find very ugly:
LOOP AT itab ASSIGNING <wa> GROUP BY ( ven_no = <wa>-ven_no ) REFERENCE INTO DATA(vendor).
  LOOP AT GROUP vendor ASSIGNING <ven> GROUP BY ( date = <ven>-date ) DESCENDING.
    CHECK NOT line_exists( it_vend_max[ ven_no = <ven>-ven_no ] ).
    it_vend_max = VALUE #( BASE it_vend_max ( <ven> ) ).
  ENDLOOP.
ENDLOOP.
Is there a more elegant way to do this?
I also tried REDUCE:
result = REDUCE #( vend_line = value ty_s_vend()
MEMBERS = VALUE ty_t_vend( )
FOR GROUPS <group_key> OF <wa> IN itab
GROUP BY ( key = <wa>-ven_no count = GROUP SIZE
ASCENDING
NEXT vend_line = VALUE #(
ven_no = <wa>-ven_no
date = REDUCE i( INIT max = 0
FOR m IN GROUP <group_key>
NEXT max = nmax( val1 = m-date
val2 = <wa>-date ) )
deliv_no = <wa>-deliv_no
MEMBERS = VALUE ty_s_vend( FOR m IN GROUP <group_key> ( m ) ) ).
but REDUCE selects the max date from the whole table, and it produces only a flat structure, which is not what I want. However, in the ABAP examples I saw, table-to-table reductions also seem to be possible. Am I wrong?
Another thing I tried was finding unique rows with WITHOUT MEMBERS, but this syntax doesn't work:
it_vend_max = VALUE ty_t_vend( FOR GROUPS value OF <line> IN itab
GROUP BY ( <line>-ven_no <line>-ship_no )
WITHOUT MEMBERS ( lifnr = value
date = nmax( val1 = <line>-date
val2 = value-date ) ) ).
Any suggestion about what is wrong here, or your own elegant solution, is appreciated.
If it's not too complex, I think it's best to use a single constructor expression, which shows that the goal of the expression is to initialize one variable and nothing else.
The following is the most performant and shortest version I could come up with, but I can't make it elegant:
TYPES ty_ref_s_vend TYPE REF TO ty_s_vend.
result = VALUE ty_t_vend(
           FOR GROUPS <group_key> OF <wa> IN itab
           GROUP BY ( ven_no = <wa>-ven_no ) ASCENDING
           LET max2 = REDUCE #(
                        INIT max TYPE ty_ref_s_vend
                        FOR <m> IN GROUP <group_key>
                        NEXT max = COND #( WHEN max IS NOT BOUND
                                             OR <m>-date > max->*-date
                                           THEN REF #( <m> ) ELSE max ) )
           IN ( max2->* ) ).
As you can see, I use a data reference (max2, of type ty_ref_s_vend) for better performance, to point to the line which has the most recent date. It's theoretically faster than copying the bytes of the whole line, but it's less readable. If you don't have a huge table, there won't be a big difference between using an auxiliary data reference or an auxiliary data object.
PS: I could not test it because the question does not provide an MCVE.
Here is another solution if you really want to use REDUCE in the primary constructor expression (but it's not needed):
result = REDUCE ty_t_vend(
           INIT vend_lines TYPE ty_t_vend
           FOR GROUPS <group_key> OF <wa> IN itab
           GROUP BY ( ven_no = <wa>-ven_no ) ASCENDING
           NEXT vend_lines = VALUE #(
                  LET max2 = REDUCE ty_ref_s_vend(
                               INIT max TYPE ty_ref_s_vend
                               FOR <m> IN GROUP <group_key>
                               NEXT max = COND #( WHEN max IS NOT BOUND
                                                    OR <m>-date > max->*-date
                                                  THEN REF #( <m> ) ELSE max ) )
                  IN BASE vend_lines
                  ( max2->* ) ) ).
What do you mean by an elegant solution? Using GROUP or REDUCE with the "new" ABAP syntax does not make it elegant in any way, at least for me...
For me, coding that is easily understandable for everyone is elegant:
SORT itab BY vendor date DESCENDING.
DELETE ADJACENT DUPLICATES FROM itab COMPARING vendor.
Or if the example is more complex, a simple LOOP AT with IF or AT inside it, APPENDING aggregated lines to a new itab, will also solve it; see the sketch below.
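A rough, untested sketch of that plain LOOP AT variant (reusing itab, it_vend_max, ty_t_vend and the ven_no / deliv_no / date components from the question):
DATA it_vend_max TYPE ty_t_vend.

SORT itab BY ven_no ASCENDING date ASCENDING.
LOOP AT itab INTO DATA(ls_vend).
  " Keep a copy: inside AT ... ENDAT the components to the right of the
  " control level (deliv_no, date) are masked with '*' / initial values.
  DATA(ls_latest) = ls_vend.
  AT END OF ven_no.
    " After the sort, the last row per vendor carries the most recent date.
    APPEND ls_latest TO it_vend_max.
  ENDAT.
ENDLOOP.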

Using Group By and Max function together in DAX

I have a table with Day, Name and Rate columns, and I want to group by the day and name and then order by the MAX of rate. I use this expression:
NewTable =
CALCULATETABLE (
Table1,
GROUPBY ( Table1, Table1[Day], Table1[Name], "maxrate", MAX ( Table1[Rate] ) ))
But I receive an error. Can anyone explain how MAX and GROUPBY can be used together in DAX?
Just use the SUMMARIZE function instead of GROUPBY:
New Table = SUMMARIZE ( Table1, Table1[Day], Table1[Name], "maxrate", MAX ( Table1[Rate] ) )
GROUPBY requires an iterator (such as MAXX). For example, let's say your table has rate and quantity, and you want to calculate the max amount (rate * quantity). Then you should use GROUPBY:
New Table =
GROUPBY (
Table1,
Table1[Day],
Table1[Name],
"Max Amount", MAXX ( CURRENTGROUP (), Table1[Rate] * Table1[Quantity] )
)
Here, you first group Table1 by day and name, and then iterate over the current group to find the max amount.
GROUPBY is very handy in some complicated cases, but your situation seems straightforward.

How to get min or max date on columns in MDX query

What MDX query logic could I implement for this example to get two rows in the result set for hrid = 1: one with 1/1/16 as the min date (Start) where someattribute appears on columns with the value 'A',
and one with 1/15/16 as the min date (Start) where someattribute has the value 'B', with Measures.Whatevers aggregated over whatever data corresponds to that dimension row.
I'm trying to look at January 2016 only.
Everything I've tried seems to give min date values of 1/1/1900, or both rows get the value 1/1/2016, or I get errors because I can't figure it out.
Here's my MDX sample:
WITH MEMBER [Measures].[Start] as
(
-- min date that the combination of someattribute and hrid have certain
-- value withing the range of the where clause restriction of january 2016
SELECT {
[Measures].[Start]
, [Measures].[Whatevers]
} ON COLUMNS
, NON EMPTY {
[Agent].[HRID].children
* [Agent].[someAtribute].Members
} ON ROWS
FROM [RADM_REPORTING]
WHERE (
[Date].[Date View].[Month].&[201601]
)
This works, but it feels kind of like a hack, or maybe just not robust; I am not familiar enough with MDX to be able to make that call.
WITH MEMBER [Measures].[Start] as
filter([Date].[Date View].[Month].&[201601].children,
[Measures].[Whatevers]).item(0).membervalue
Here is a potential direction that is more general:
WITH
MEMBER [Measures].[Start] AS
Min
(
(EXISTING
[Date].[Date].[Date].MEMBERS)
,IIF
(
[Measures].[Internet Sales Amount] = 0
,NULL
,[Date].[Date].CurrentMember.MemberValue
)
)
SELECT
NON EMPTY
{
[Measures].[Start]
,[Measures].[Internet Sales Amount]
} ON COLUMNS
,NON EMPTY
[Product].[Product Categories].[Product] ON ROWS
FROM [Adventure Works]
WHERE
[Date].[Calendar].[Calendar Year].&[2005];
For each product, this returns the earliest date in 2005 with a non-zero Internet Sales Amount, together with the amount itself.

MDX query not accepting date values

I'm an SSAS newbie and I'm trying to query a cube to retrieve data for some measure groups, ordered by date. I wish to specify the date range in my query. The query I'm using is this:
SELECT
{
[Measures].[Measure1],
[Measures].[Measure2],
[Measures].[Measure3]
}
ON COLUMNS,
NON EMPTY{
[Date].[AllMembers]
}
ON ROWS
FROM (SELECT ( STRTOMEMBER('2/23/2013', CONSTRAINED) :
STRTOMEMBER('3/1/2013', CONSTRAINED) ) ON COLUMNS
FROM [MyCube])
However, it gives me the following error:
Query (10, 16) The restrictions imposed by the CONSTRAINED flag in the STRTOMEMBER function were violated.
I tried removing the CONSTRAINED keyword and then even the STRTOMEMBER function, but in each case I got the following errors, respectively:
Query (10, 16) The STRTOMEMBER function expects a member expression for the 1 argument. A string or numeric expression was used.
and
Query (10, 14) The : function expects a member expression for the 1 argument. A string or numeric expression was used.
I can understand from the last two errors that I need to include the CONSTRAINED keyword. But can anyone tell me why this query won't execute?
The string that you pass as the member expression must be a fully-qualified member name, or resolve to one. Use the same format as you did in the SELECT.
For example:
STRTOMEMBER('[Date].[2/23/2013]', CONSTRAINED)
Edit: I just noticed the syntax of your range select looks wrong -- you need to use {...}, not (...).
SELECT {
STRTOMEMBER('2/23/2013', CONSTRAINED) :
STRTOMEMBER('3/1/2013', CONSTRAINED) }
Please execute the script below. Extract your date dimension member's unique name (copy it by right-clicking) and paste it into the STRTOMEMBER value.
It will work fine.
SELECT NON EMPTY { [Measures].[Internet Sales Amount] } ON COLUMNS
FROM ( SELECT ( STRTOMEMBER('[Date].[Date].&[20050701]') :
STRTOMEMBER('[Date].[Date].&[20061007]') ) ON COLUMNS
FROM [Adventure Works])
The same subselect pattern also works with parameters in place of the literal member names:
FROM ( SELECT (
    STRTOMEMBER(#FromDateCalendarDate, CONSTRAINED) :
    STRTOMEMBER(#ToDateCalendarDate, CONSTRAINED) ) ON COLUMNS

Conversion between timestamp to milliseconds in DB2

I have a column of datatype TIMESTAMP. Now I need to convert it to milliseconds and put it in another column. How can I do that?
The input is of the format 2011-10-04 13:54:50.455227 and the output needs to be 1317900719.
There's a function called TIMESTAMPDIFF. Using it against January 1st 1970 would otherwise work, but the function gives approximate results. If you want accuracy, you will want to calculate the answer yourself with something like:
create function ts2millis(t timestamp)
returns bigint
return (
(
(bigint(year(t) - 1970)*bigint(31556926000))+
(bigint(month(t))*bigint(2629743000))+
(bigint(day(t))*bigint(86400000))+
(bigint(hour(t))*bigint(3600000))+
(bigint(minute(t))*bigint(60000))+
(bigint(second(t))*bigint(1000))+
(bigint(microsecond(t))/bigint(1000))
)
)
#
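For comparison, a rough sketch of the approximate TIMESTAMPDIFF route mentioned above (interval code 2 means seconds; ts_col and my_table are placeholder names, and the result drifts because TIMESTAMPDIFF estimates using 30-day months and 365-day years):
-- Approximate seconds since the epoch; multiply by 1000 if milliseconds are really needed.
SELECT TIMESTAMPDIFF(2, CHAR(ts_col - TIMESTAMP('1970-01-01-00.00.00')))
FROM my_table;

-- The ts2millis UDF defined above would be called the same way:
SELECT ts2millis(ts_col)
FROM my_table;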
Your requested output is not milliseconds but seconds since the Unix epoch (what the C library's time() returns); here's how to compute it:
SELECT
86400*
(
DAYS(TIMESTAMP(v_timestamp))
-
DAYS(TIMESTAMP('1970-01-01-00.00.00'))
)
+
MIDNIGHT_SECONDS(timestamp(v_timestamp))
FROM
SYSIBM.SYSDUMMY1;
where v_timestamp is the variable or column to be converted.