DAX query to get a specific range of rows

How can I create a DAX query that retrieves the rows in a given range, in order? Let's say I want rows 1000 through 2000. There is no unique ID in my database. Should I add one, or is it possible without it?

If you can't construct a filter that isolates the subset of rows you are targeting, then I would use a unique ID. I have not come across anything in DAX that lets you select rows by position in your Power Pivot data set. If there is nothing unique about the data you are targeting, then I imagine you will need a unique ID.
I.e., I normally have column values I can filter on to target or create the subset of data I want to use.
I hope I am wrong and there is a way, and I look forward to someone posting one.
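For what it's worth, here is a rough sketch of what the query could look like once an index column has been added (the table name MyTable and the column RowIndex are placeholders; the index would be populated 1..N in load order, e.g. in Power Query), run from DAX Studio or SSMS:

EVALUATE
FILTER (
    'MyTable',
    // keep only rows whose index falls in the requested range
    'MyTable'[RowIndex] >= 1000
        && 'MyTable'[RowIndex] <= 2000
)
ORDER BY 'MyTable'[RowIndex]

Without some such column there is nothing stable to slice or order by, which is why the unique ID is suggested above.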

Related

Spotfire - Filtering a Table by Values in a Calculated Column

I am trying to filter a table visualization of all of my data by looking to see if a Study Number contains Activity A. If a Study Number contains Activity A, then I want to filter for all rows containing those Study Numbers, even if the Activity is not A. See the mock data below. In my real data set I have ~55,000 rows.
I have created a calculated column to return Study Numbers where Activity = A, but I am not sure where to go from there. Thanks for any help.
If(UniqueConcatenate([Activity]) OVER ([Study Number])~="A","Y","N")
This will give you a resulting column that you can then filter on (or you can use the formula as a Data Limiting Expression).

Tableau Fixed Calculation Summing too much data

I have volume data for specific customers. The customer names come from Salesforce and the volume comes from another table. When I add each in Tableau, I get a nice table that seems to be working.
We can see that there are 19 values (~500). My ultimate goal is to sum these based upon filters.
A way I discovered to do that is to use the syntax
{ FIXED [Account Id]: count([Volume]) }
But when I do that, I get a different result. When I change my function to count([volume]), I get a count of all joined rows (~250k).
My question is: how do I make this respect individual entries in the database and not all the joined values? If there were a way to do the sum for distinct timestamps in another field, that would also work. Any other advice from you Tableau experts would be helpful.
Thanks!
I think I got it. In the table I was trying to calculate from, there were 20 rows that needed to be included in the calculation. When the data was joined with the Salesforce data, those rows were duplicated. The trick here was to take the sum of the max for each primary key:
SUM({ FIXED [Pk], [Name1] : MAX([Volume]) })
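Since most of the related questions here are DAX-flavoured, it may be worth noting the equivalent pattern on the Power BI side: take the MAX of Volume per primary key and sum those, so the join-duplicated rows only count once. This is only a sketch; the table name Fact and the columns [Pk] and [Volume] are stand-ins for whatever the model actually uses.

Total Volume (deduped) =
SUMX (
    VALUES ( Fact[Pk] ),                  // one row per primary key, ignoring join duplicates
    CALCULATE ( MAX ( Fact[Volume] ) )    // the single Volume value behind each key
)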

DAX: Distinct and then aggregate twice

I'm trying to create a Measure in Power BI using DAX that achieves the below.
The data set has four columns: Name, Month, Country and Value. There are duplicates, so first I need to dedupe across all four columns, then group by Month and sum the Value. Then I need to average across the months to arrive at a single value. How would I achieve this in DAX?
I figured it out. The reply by #OscarLar was very close, but nested SUMMARIZE causes problems because it cannot aggregate values calculated dynamically within the query itself (https://www.sqlbi.com/articles/nested-grouping-using-groupby-vs-summarize/).
I kept the inner SUMMARIZE from #OscarLar's answer and changed the outer SUMMARIZE to a GROUPBY. Here's the code that worked:
AVERAGEX (
    GROUPBY (
        SUMMARIZE ( Data, Data[Name], Data[Month], Data[Country], Data[Value] ),
        Data[Month],
        "Month_Value", SUMX ( CURRENTGROUP (), Data[Value] )
    ),
    [Month_Value]
)
Not sure I completely understood the question, since you didn't provide example data or any DAX code you've already tried. Please do so next time.
I'm assuming parts of this cannot (for whatever reason) be done in Power Query, so you have to use DAX. If so, I think this will do what you described.
First, create a temporary data table called Data_reduced in which duplicate rows have been removed:
Data_reduced =
SUMMARIZE(
'Data';
'Data'[Name];
'Data'[Month];
'Data'[Country];
'Data'[Value]
)
Then create the averaging measure like this
AveragePerMonth =
AVERAGEX(
SUMMARIZE(
'Data_reduced';
'Data_reduced'[Month];
"Sum_month"; SUM('Data_reduced'[Value])
);
[Sum_month]
)
Where Data is the name of the table.
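If you would rather not keep a separate Data_reduced table in the model, the same steps can be folded into a single measure with variables. This is essentially the GROUPBY formula the asker posted above, just spread out for readability (table and column names follow the example; comma separators as in the asker's version):

AveragePerMonth =
// 1. remove exact duplicate rows across the four columns
VAR Deduped =
    SUMMARIZE ( 'Data', Data[Name], Data[Month], Data[Country], Data[Value] )
// 2. total the deduplicated Value per Month
VAR PerMonth =
    GROUPBY ( Deduped, Data[Month], "Sum_month", SUMX ( CURRENTGROUP (), Data[Value] ) )
RETURN
    // 3. average the monthly totals down to one value
    AVERAGEX ( PerMonth, [Sum_month] )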

Can't figure out how to assign a column from a Dim table to a measure group

The DW is written in a foreign language, so I translated it in the pictures.
It is about a facility where you can rent rooms and equipment.
In the fact table I have one measure column which represents the number of reservations, and it works as expected. The problem is that the DimPayment table has the columns PaymentSum and PaymentSumWDiscount, which I would need to use as measures. I tried to do something in the calculations tab, but I only get null values.
JOIN the FactReserving table to the DimPayment table in the Data Source View, and then you will be able to access those two columns in the measure group.
It looks like you want to use your dimension attributes as measures.
One way to achieve this is to make your dimension table a measure group as well.
The following example uses the AdventureWorks sample to create a measure group for the Product table:
In Visual Studio / SSDT, open your cube, create a new Measure Group, and select your table (for your scenario, it would be the DimPayment table).
Your dimension usage mapping for this would be 'Fact', as both your fact and dimension are in the same table.
Now you should be able to use the measures normally.
All the numeric fields are automatically added as measures, and you can then use them like any other measure.

Insert multiple records into fact table based on fields in single record

I'm working in Pentaho 4.4.1-GA (Kettle / PDI). The database is Postgres.
I need to be able to insert multiple records into a fact table based on the fields that come from a single record. The single record contains fields:
productcode1, price1
productcode2, price2
productcode3, price3
...
productcode10,price10
So if there was a value for each of the 10 productcode/price pairs, then I'd need to insert a total of 10 records into the fact table. If there were values for 4 of the combinations, then I'd need to insert 4 records into the fact table, and so on. All field values for the fact records would be identical except for the PK (generated by a sequence), the product codes, and the prices.
I figure that I need some type of looping construct that would let me check whether or not a value is present for each productcode field and, if so, do an Insert/Update step on the fact table with the desired field values. I'm just not sure how to do this in Pentaho.
Any ideas? All suggestions are welcome :)
Thank You,
Rakesh
Could you give a sample input and output for your scenario?
From your example data, I infer that if there are 10 different product codes but only 4 product prices, you want 4 records inserted into your table. Is that so?
Well, for a start, you can add a constant value of 1 to those records by filtering for NOT NULL and then use a Group By step to count the number of 1's. That would give you the count. By the way, it would be helpful if you could provide more details on which columns you would be loading, as there are ways to make a PDI transformation execute multiple times.