Create a static table in a tabular model - ssas-tabular

I need to create a static dimension in an SSAS tabular model without adding it to the data warehouse that populates the cube. The dimension looks like this:
Status              | StatusID
In                  | 1
Out                 | 2
In (Exit Confirmed) | 3
How can I achieve that?

You can use the SELECTCOLUMNS() function in DAX to achieve this:
= SELECTCOLUMNS(
    { ("In", 1), ("Out", 2), ("In (Exit Confirmed)", 3) },
    "Status", [Value1],
    "StatusID", [Value2]
)


How to create an inline table from an existing table

I have a table in Qlik Sense loaded from the database.
Example:
ID | FRUIT   | VEG     | COUNT
1  | Apple   |         | 5
2  | Figs    |         | 10
3  |         | Carrots | 20
4  | Oranges |         | 12
5  |         | Corn    | 10
From this I need to make a filter that will display all the Fruit/Veg records along with records from other joined tables, when selected.
The filter needs to be something like this :
|FRUIT_XXX|
|VEG_XXX |
Any help will be appreciated.
I do not know how to do it in Qlik Sense, but in SQL it's like this:
SELECT
    ID,
    CASE WHEN FRUIT IS NULL THEN VEG ELSE FRUIT END AS FruitOrVeg,
    COUNT
FROM tablename
Not sure if it's possible to make that dynamic. Usually I solve these by creating a new field that combines the values from both fields into one field:
RawData:
Load * Inline [
ID , FRUIT ,VEG , COUNT
1 , Apple , , 5
2 , Figs , , 10
3 , ,Carrots , 20
4 , Oranges , , 12
5 , ,Corn , 10
];
Combined:
Load
ID,
'FRUIT_' & FRUIT as Combined
Resident
RawData
Where
FRUIT <> ''
;
Concatenate
Load
ID,
'VEG_' & VEG as Combined
Resident
RawData
Where
VEG <> ''
;
This will create a new table (Combined), which will be linked to the main table by the ID field. The new Combined field will contain values like FRUIT_Apple, FRUIT_Figs, VEG_Carrots, FRUIT_Oranges and VEG_Corn, which can be used directly in a filter pane.
P.S. If further processing is needed, you can join the Combined table to the RawData table; this way the Combined field becomes part of the RawData table. To achieve this, just extend the script a bit:
join (RawData)
Load * Resident Combined;
Drop Table Combined;
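As a sanity check, the prefix-and-concatenate logic above can be sketched in plain Python (using the same inline data; the Qlik script itself is authoritative):

```python
# Mimic the Qlik script: emit "FRUIT_" rows for non-empty FRUIT
# values and "VEG_" rows for non-empty VEG values, keyed by ID.
raw_data = [  # (ID, FRUIT, VEG, COUNT)
    (1, "Apple", "", 5),
    (2, "Figs", "", 10),
    (3, "", "Carrots", 20),
    (4, "Oranges", "", 12),
    (5, "", "Corn", 10),
]

combined = [(id_, "FRUIT_" + fruit) for id_, fruit, _, _ in raw_data if fruit]
combined += [(id_, "VEG_" + veg) for id_, _, veg, _ in raw_data if veg]

for id_, value in combined:
    print(id_, value)
```

Each source row contributes exactly one Combined row, just as the two `Resident` loads with their `Where` clauses do in the script.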

Does Spark support the below cascaded query?

I have a requirement to run some queries against tables in a PostgreSQL database to populate a DataFrame. The tables are as follows.
table 1 has the below data.
QueryID | WhereClauseID | Enabled
1       | 1             | true
2       | 2             | true
3       | 3             | true
...
table 2 has the below data.
WhereClauseID | WhereClauseString
1             | a>b
2             | a>c
3             | a>b && a<c
...
table 3 has the below data.
a  | b  | c  | value
30 | 20 | 30 | 100
20 | 10 | 40 | 200
...
I want to query in the following way. For table 1, I want to pick up the rows when Enabled is true. Based on the WhereClauseID in each row, I want to pick up the rows in table 2. Based on the WhereClause condition picked up from table 2, I want to run the query using Where Clause to query table 3 to get the Value. Finally, I want to get all records in table 3 meeting the WhereClauses enabled in table 1.
I know I can go through table 1 row by row and use parameterized strings to build SQL queries against table 3. But querying row by row is very inefficient, especially if table 1 is big. Is there a better way to organize the query to improve efficiency? Thanks a lot!
Depending on your use case, you might be able to solve this using the when expression in PySpark. Here is a suggestion:
import pyspark.sql.functions as F

tbl1 = spark.table("table1")
tbl3 = spark.table("table3")

tbl3 = (
    tbl3
    .withColumn(
        "WhereClauseID",
        ## You could parse your table2 here if you want the
        ## conditions to be evaluated programmatically.
        ## Note that a when-chain assigns only the first matching ID.
        F.when(F.col("a") > F.col("b"), 1)
         .when(F.col("a") > F.col("c"), 2)
         .otherwise(-1)
    )
)
tbl1_with_tbl_3 = tbl1.join(tbl3, "WhereClauseID", "left")
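If the conditions must come from table 2 at runtime rather than being hard-coded, another option (a sketch, assuming every WhereClauseString is a valid SQL boolean expression) is to OR together the enabled clauses and issue a single query against table 3. The string-building step looks like this:

```python
# Combine the enabled where-clauses from table 1 / table 2 into a
# single predicate. The "&&" in the original data is written as SQL
# "AND" here (an assumption about how the strings would be stored).
table1 = [  # (QueryID, WhereClauseID, Enabled)
    (1, 1, True),
    (2, 2, True),
    (3, 3, False),
]
table2 = {  # WhereClauseID -> WhereClauseString
    1: "a > b",
    2: "a > c",
    3: "a > b AND a < c",
}

enabled_ids = [wc_id for _, wc_id, enabled in table1 if enabled]
combined = " OR ".join(f"({table2[i]})" for i in enabled_ids)
query = f"SELECT * FROM table3 WHERE {combined}"
print(query)
```

In PySpark the same string could then be applied with `tbl3.filter(F.expr(combined))`, so table 3 is scanned once regardless of how many clauses are enabled.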

Cassandra Select Query for List and Frozen

I have a user-defined type like:
CREATE TYPE point ( pointId int, floor text);
And I have a table like:
CREATE TABLE path (
id timeuuid,
val timeuuid,
PointList list<frozen <point>>,
PRIMARY KEY(id,val)
);
And I have created an index like:
create index on path(PointList);
But the problem is that I am not able to execute a select query where PointList = [floor : "abc"].
I googled for 2 hours but was not able to find a hint.
I am using this query:
Select * from path where val = sdsdsdsdsds-dsdsdsd-dssds-sdsdsd and PointList contains {floor: 'eemiG8NbzdRCQ'};
I can see this data in my Cassandra table but am not able to get it using the above query.
I want a select query that uses only floor and val, because we only have data for floor and val.
I tried many different ways but nothing is working.
I would appreciate any kind of hint or help.
Thank you,
Frozen point means the point type is frozen: you can't match on a partial point value, you have to provide the full value of the point.
Example Query :
select * from path where pointlist CONTAINS {pointId : 1, floor : 'abc'};

Dynamic calculation in postgres

I am trying to do a dynamic calculation in Postgres, as follows:
drop table if exists calculation_t;
create table calculation_t(
Id serial,
calculation_logic varchar(50))
I inserted a calculation logic:
insert into calculation_t(calculation_logic)
values ('$1-$2')
Checking the calculation by providing the dynamic numbers 2 and 1:
select (2,1,calculation_logic::numeric) from calculation_t
throws an error : "ERROR: Can't serialize transient record types"
I am expecting the select to return 1 (my calculation logic is $1-$2, so for id 1 with the values 2 and 1 it is 2-1).
Also tried :
select replace(replace(calculation_logic, '$1', '2'), '$2', '1') from calculation_t
result :
"2-1"
But I need the actual value of the subtraction 2-1 instead, that is, 1.
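Postgres cannot cast the text '$1-$2' into an executable expression; evaluating a formula stored as text needs dynamic SQL (EXECUTE inside a PL/pgSQL function, with the placeholders substituted first). The substitute-then-evaluate idea can be sketched in Python; the `evaluate` helper here is illustrative, not a Postgres feature:

```python
# Sketch of substitute-then-evaluate for a formula stored as text.
calculation_logic = "$1-$2"  # as stored in calculation_t

def evaluate(logic, arg1, arg2):
    # Substitute the placeholders with the supplied numbers ...
    expr = logic.replace("$1", str(arg1)).replace("$2", str(arg2))
    # ... then evaluate the resulting arithmetic expression.
    # eval() is acceptable for a trusted demo only.
    return eval(expr)

print(evaluate(calculation_logic, 2, 1))  # → 1
```

In Postgres the same substitution could be done with replace(), with the resulting string run via EXECUTE ... INTO inside a PL/pgSQL function.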

Ytd function in MDX query does not work: "Query (2, 2) By default, a year level was expected. No such level was found in the cube."

I use SSAS with SQL Server 2008 R2 and the AdventureWorks database.
I wrote this query:
Select
ytd([Date].[Calendar].[Calendar Quarter].[Q3 CY 2003]) on columns
From [Adventure Works]
and I get a result. But when I execute this query:
Select
ytd([Date].[Fiscal].[Fiscal Quarter].[Q3 FY 2003]) on columns
From [Adventure Works]
I get this error:
Executing the query ...
Query (2, 2) By default, a year level was expected. No such level was found in the cube.
Execution complete
Why does this query not work?
From the documentation: "The Ytd function is a shortcut function for the PeriodsToDate function [...]. Note that this function will not work when the Type property is set to FiscalYears." How about using the following instead:
Select
PeriodsToDate(
[Date].[Fiscal].[Fiscal Year],
[Date].[Fiscal].[Fiscal Quarter].[Q3 FY 2003]
) on columns
From [Adventure Works]