Saiku Hierarchize function not returning expected cellset - rest

Can someone explain why this MDX query, executed against the Saiku REST API, does not return the expected cellset? The query is executed as an MDX-type ThinQuery.
SELECT
{
[Location.CountryHierarchy].[Croatia]
,[Location.CountryHierarchy].[Serbia]
} ON COLUMNS
,Hierarchize
(
Union
(
{
[Product.ProductHierarchy].[Drinks]
,[Product.ProductHierarchy].[Food]
}
,[Product.ProductHierarchy].[Drinks].Children
)
) ON ROWS
FROM [Sales cube];
Expected output (tested with jpivot, pivot4j and Pentaho Schema Workbench/MDX explorer):
[Expected result]
Actual result rendered in the Android OLAP client I am working on. Just to be sure, I also checked the JSON returned by the Saiku server, and the cells really are missing:
[Actual result]

If you are executing using the execute endpoint and passing MDX or a Query Model in, you need to play with the properties section. Try this:
"properties": {
"saiku.olap.query.automatic_execution": true,
"saiku.olap.query.nonempty": true,
"saiku.olap.query.nonempty.rows": true,
"saiku.olap.query.nonempty.columns": true,
"saiku.ui.render.mode": "table",
"saiku.olap.query.filter": true,
"saiku.olap.result.formatter": "flat",
"org.saiku.query.explain": true,
"org.saiku.connection.scenario": false,
"saiku.olap.query.drillthrough": true
}

Maybe try adding in the [ALL] member for Food to force the issue?
This was my AdvWrks mock-up:
SELECT
{
[Product].[Category].&[4]
,[Product].[Category].&[1]
} ON 0
,Hierarchize
(
Union
(
{
[Customer].[Customer Geography].[Country].&[Australia]
,[Customer].[Customer Geography].[Country].&[Canada]
}
,{
[Customer].[Customer Geography].[Country].&[Australia].Children
,[Customer].[Customer Geography].[Country].&[Canada].[All]
}
)
) ON 1
FROM [Adventure Works]
WHERE
[Measures].[Internet Sales Amount];
So in your scenario:
SELECT
{
[Location.CountryHierarchy].[Croatia]
,[Location.CountryHierarchy].[Serbia]
} ON COLUMNS
,Hierarchize
(
{
[Product.ProductHierarchy].[Drinks]
,[Product.ProductHierarchy].[Food]
,[Product.ProductHierarchy].[Drinks].Children
}
) ON ROWS
FROM [Sales cube];

Related

sequelize, query property on array of objects

I have looked extensively for an answer but could not find a simple solution.
I have a table that contains a column subscriptionHistory
The data can look like so:
[
{
"fromDate": "2023-01-24T10:11:57.150Z",
"userSubscribedToo": "EuuQ13"
},
{
"fromDate": "2022-01-24T10:11:57.150Z",
"tillDate": "2022-02-24T22:59:59.999Z",
"userSubscribedToo": "a4ufoAB"
}
]
I'm trying to find the records of the subscriptions.
In Mongo we do
'subscriptionHistory.$.userSubscribedToo' = 'a4ufoAB'
Nice and easy.
I'm using PostgreSQL and Sequelize, and the following doesn't work:
const totalEarnings = await SubscriptionToken.count({
where: {
'subscriptionHistory.$.userSubscribedToo': user.id,
},
});
Neither do any direct queries
SELECT *
FROM vegiano_dev."subscription-tokens"
WHERE "subscriptionHistory"->>'userSubscribedToo' = 'a4ufoAB'
--WHERE "subscriptionHistory" #> '{"userSubscribedToo": "a4ufoAB"}'
Not sure where to go now :-/
You can use a JSON path condition with the @@ (JSON path match) operator:
select *
from vegiano_dev."subscription-tokens"
where "subscriptionHistory" @@ '$[*].userSubscribedToo == "a4ufoAB"'
The @> (containment) operator will work as well, but because subscriptionHistory is an array, you need to wrap the object in an array with that operator:
where "subscriptionHistory" @> '[{"userSubscribedToo": "a4ufoAB"}]'
This assumes that subscriptionHistory is defined as jsonb, which it should be. If it's not, you need to cast it: "subscriptionHistory"::jsonb
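The same check can also be written with the jsonb_path_exists() function (PostgreSQL 12 or later), which tends to be easier to embed in a Sequelize where clause via sequelize.literal(). A minimal sketch of that SQL, reusing the table and column names from the question:
-- jsonb_path_exists() is the function form of the @? operator: it returns true
-- if the JSON path yields at least one item for the given value.
select count(*)
from vegiano_dev."subscription-tokens"
where jsonb_path_exists(
  "subscriptionHistory"::jsonb,  -- the cast is only needed if the column is json rather than jsonb
  '$[*] ? (@.userSubscribedToo == "a4ufoAB")'
);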

Sort by json element at nested level for jsonb data - postgresql

I have the below table in PostgreSQL, which stores JSON data in a jsonb column.
CREATE TABLE "Trial" (
id SERIAL PRIMARY KEY,
data jsonb
);
Below is the sample json structure
{
  "id": "000000007001593061",
  "core": {
    "groupCode": "DVL",
    "productType": "ZDPS",
    "productGroup": "005001000"
  },
  "plants": [
    {
      "core": {
        "mrpGroup": "ZMTS",
        "mrpTypeDesc": "MRP",
        "supLeadTime": 777
      },
      "storageLocation": [
        { "core": { "storageLocation": "H050" } },
        { "core": { "storageLocation": "H990" } },
        { "core": { "storageLocation": "HM35" } }
      ]
    }
  ],
  "discriminator": "Material"
}
These are the scripts to insert the JSON data:
INSERT INTO "Trial"(data)
VALUES(CAST('{"id":"000000007001593061","core":{"groupCode":"DVL","productType":"ZDPS","productGroup":"005001000"},"plants":[{"core":{"mrpGroup":"ZMTS","mrpTypeDesc":"MRP","supLeadTime":777},"storageLocation":[{"core":{"storageLocation":"H050"}},{"core":{"storageLocation":"H990"}},{"core":{"storageLocation":"HM35"}}]}],"discriminator":"Material"}' AS JSON))
INSERT INTO "Trial"(data)
VALUES(CAST('{"id":"000000000104107816","core":{"groupCode":"ELC","productType":"ZDPS","productGroup":"005001000"},"plants":[{"core":{"mrpGroup":"ZCOM","mrpTypeDesc":"MRP","supLeadTime":28},"storageLocation":[{"core":{"storageLocation":"H050"}},{"core":{"storageLocation":"H990"}}]}],"discriminator":"Material"}' AS JSON))
INSERT INTO "Trial"(data)
VALUES(CAST('{"id":"000000000104107818","core":{"groupCode":"DVK","productType":"ZDPS","productGroup":"005001000"},"plants":[{"core":{"mrpGroup":"ZMTL","mrpTypeDesc":"MRP","supLeadTime":28},"storageLocation":[{"core":{"storageLocation":"H050"}},{"core":{"storageLocation":"H990"}}]}]}' AS JSON))
If I try to sort by a first-level element, it works:
select id,data->'core'->'groupCode'
from "Trial"
order by data->'core'->'groupCode' desc
But when I try to sort by a nested-level element with the script below, it doesn't work for me. I'm sure I'm doing something wrong in this script, but I don't know what it is:
select id,data->'plants'
from sap."Trial"
order by data->'plants'->'core'->'mrpGroup' desc
I need assistance writing a query that orders by a nested-level element of JSONB data.
The query below works for me:
SELECT id, data FROM "Trial" ORDER BY jsonb_path_query_array(data, '$.plants[*].core[*].mrpGroup') desc limit 100
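If every document holds at most one entry in plants, as in the sample data above, a simpler alternative might be to sort on the first element's path directly with the #> operator. A minimal sketch under that assumption:
-- Sort by the mrpGroup of the first (index 0) plants element
select id, data #> '{plants,0,core,mrpGroup}' as mrp_group
from "Trial"
order by data #> '{plants,0,core,mrpGroup}' desc;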

Mimic COALESCE behaviour using groups of conditions when querying with Prisma client

I'm trying to replace a SQL query (Postgres) with a Prisma client query, and I'm struggling to implement this COALESCE logic:
COALESCE(date_1, date_2,'1970-01-01 00:00:00') <= CURRENT_TIMESTAMP
As far as I can tell, the Prisma client doesn't currently support COALESCE, so there's no simple way for me to do this. I think I have to group conditions together to replicate it. The logic would be along the lines of:
ANY of the following:
  date_1 <= CURRENT_TIMESTAMP
  ALL of the following:
    date_1 IS NULL
    date_2 <= CURRENT_TIMESTAMP
  ALL of the following:
    date_1 IS NULL
    date_2 IS NULL
But I can't work out how to group conditions like this using the Prisma client, because you can't have multiple ANDs within an OR (or vice versa):
const records = await client.thing.findMany({
  where: {
    OR: {
      date_1: { lte: new Date() },
      AND: {
        date_1: null,
        date_2: { lte: new Date() }
      },
      AND: { // <--- Can't do this, because we've already used AND above
        date_1: null,
        date_2: null
      }
    }
  },
});
We obviously can't repeat the same object key twice, so I can't have two AND statements at the same level; otherwise I get the error:
An object literal cannot have multiple properties with the same name
I'm probably missing something obvious, but I cannot work out how to achieve groups of conditions like this. If anyone can help I'd appreciate it!
I should mention that I'm using:
nodejs v14.19.1
prisma ^3.14.0
I was indeed missing something obvious. We can pass in an array of groups to an AND or OR group, like so:
const records = await client.thing.findMany({
  where: {
    OR: {
      date_1: { lte: new Date() },
      AND: [
        {
          date_1: null,
          date_2: { lte: new Date() }
        },
        {
          date_1: null,
          date_2: null
        }
      ]
    }
  },
});
https://www.prisma.io/docs/concepts/components/prisma-client/filtering-and-sorting#filter-conditions-and-operators
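For reference, the grouped filter above is meant to reproduce the original COALESCE predicate expanded into plain OR/AND groups, roughly this SQL (the table name thing simply mirrors the Prisma model used above):
-- Expanded form of COALESCE(date_1, date_2, '1970-01-01 00:00:00') <= CURRENT_TIMESTAMP;
-- the last branch is always true because the fallback constant lies in the past.
SELECT *
FROM thing
WHERE date_1 <= CURRENT_TIMESTAMP
   OR (date_1 IS NULL AND date_2 <= CURRENT_TIMESTAMP)
   OR (date_1 IS NULL AND date_2 IS NULL);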

Creating an AND query on a list of items in Azure Cosmos

I'm building an application in Azure Cosmos and I'm having trouble creating a query. Using the dataset below, I want to create a query that only finds CharacterId "Susan" by searching for all characters that have the TraitId of "Athletic" and "Slim".
Here is my JSON data set
[
  {
    "characterId": "Bob",
    "traits": [
      { "traitId": "Athletic" },
      { "traitId": "Overweight" }
    ]
  },
  {
    "characterId": "Susan",
    "traits": [
      { "traitId": "Athletic" },
      { "traitId": "Slim" }
    ]
  },
  {
    "characterId": "Jerry",
    "traits": [
      { "traitId": "Slim" },
      { "traitId": "Strong" }
    ]
  }
]
The closest I've come is this query, but it acts as an OR statement and what I want is an AND statement.
SELECT * FROM Characters f WHERE f.traits IN ("Athletic", "Slim")
Any help is greatly appreciated.
EDITED: I figured out the answer to this question. If anyone is interested, this query gives the results I was looking for:
SELECT * FROM Characters f
WHERE EXISTS (SELECT VALUE t FROM t IN f.traits WHERE t.traitId = 'Athletic')
AND EXISTS (SELECT VALUE t FROM t IN f.traits WHERE t.traitId = 'Slim')
The answer that worked for me is to use EXISTS statements with SELECT statements that search the traits list. In my program I can use a StringBuilder to create a SQL statement that concatenates an AND EXISTS clause for each of the traits I want to find:
SELECT * FROM Characters f
WHERE EXISTS (SELECT VALUE t FROM t IN f.traits WHERE t.traitId = 'Athletic')
AND EXISTS (SELECT VALUE t FROM t IN f.traits WHERE t.traitId = 'Slim')
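Another option, if you want to avoid the correlated subqueries, might be ARRAY_CONTAINS with its optional third argument set to true, which enables partial matching on just the traitId property of each array element. A minimal sketch of that alternative, using the same container and property names as above:
SELECT * FROM Characters f
WHERE ARRAY_CONTAINS(f.traits, { "traitId": "Athletic" }, true)
AND ARRAY_CONTAINS(f.traits, { "traitId": "Slim" }, true)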

Append in jsonb array and update existing record based on a key

I have a table media with a jsonb array field pictures that contains an empty array.
The idea is that every time I append a new JSON object, I want to switch the default attribute of any previous object from true to false.
Sample object:
{"file": "file.jpg", "default": true}
I accomplished that with 2 different queries.
One for inserting a new record:
update media
set pictures = jsonb_set(
pictures,
concat('{' , jsonb_array_length(pictures) , '}')::text[],
jsonb_build_object('file', 'somepicture.jpg', 'default', true)
)
where user_id = 8
And one for switching from default: true to default: false:
update media
set pictures =
(
select
jsonb_agg(
case when value->>'default' = 'true' and value->>'file' != 'somepicture.jpg'
then value || jsonb_build_object('default', false)
else value
end
)
from jsonb_array_elements(media.pictures)
)
where user_id = 8
My final pictures array:
[
{
"file": "previouspicture.jpg",
"default": false
},
{
"file": "somepicture.jpg",
"default": true
}
]
How can I achieve the same thing, with only one query?
Use an extra jsonb_set() to update the previous object. You have to use coalesce() to be able to start with an empty array:
update media
set pictures = jsonb_set(
  coalesce(
    jsonb_set(
      pictures,
      array[(jsonb_array_length(pictures)-1)::text],
      pictures->jsonb_array_length(pictures)-1 || '{"default": false}'
    ),
    pictures
  ),
  array[(jsonb_array_length(pictures))::text],
  jsonb_build_object('file', 'filename.jpg', 'default', true)
)
where user_id = 8
returning *;
DbFiddle.
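To make the nesting easier to follow, here is the same two-step idea applied to a literal value (file names are just illustrative): the inner jsonb_set() merges {"default": false} into the current last element, and the outer jsonb_set() writes one position past the end of the array, which appends because the optional create-missing argument defaults to true.
select jsonb_set(
  -- inner step: flip "default" on the element at index 0
  jsonb_set(
    '[{"file": "previouspicture.jpg", "default": true}]'::jsonb,
    '{0}',
    '{"file": "previouspicture.jpg", "default": true}'::jsonb || '{"default": false}'
  ),
  -- outer step: index 1 is past the end of the one-element array, so the value is appended
  '{1}',
  '{"file": "somepicture.jpg", "default": true}'
);
-- result: [{"file": "previouspicture.jpg", "default": false}, {"file": "somepicture.jpg", "default": true}]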