How to remove duplicate elements in arrays of a JSON file using DataWeave transformations? - mule-studio

Here I want to remove duplicate accounts, identified by account_id, across all accounts in this JSON file. There are 12 accounts in this JSON file, and I need to remove the duplicates to make the accounts unique. Please help me resolve this.
There are 12 accounts in total in the input JSON payload.
Input JSON:
{
"Accounts" : [
{
"account_id" :"c93c9cc0-26b4-11ed-a261-0242ac120002",
"account_Number" : 458554,
"account_name": "Ramu"
},
{
"account_id" : "e4b0efdc-26b5-11ed-a261-0242ac120002",
"accountNumber" : 741852,
"account_name": "Rajesh"
},
{
"account_id" : "027aa0f8-26b6-11ed-a261-0242ac120002",
"accountNumber" : 963258,
"account_name": "Harsha"
},
{
"account_id" : "09ebdd0c-26b6-11ed-a261-0242ac120002",
"accountNumber" : 852456,
"account_name": "Vamsi"
}
],
"address" : [
{
"street" : "XXXX",
"state" : "XXXX",
"country": "XXXX"
}
],
"Accounts" : [
{
"account_id" : "f4974e1e-26b5-11ed-a261-0242ac120002",
"accountNumber" : 246598,
"account_name": "Indu"
},
{
"account_id" :"2fa15b30-26b6-11ed-a261-0242ac120002",
"accountNumber" : 789789,
"account_name": "Suresh"
},
{
"account_id" : "c93c9cc0-26b4-11ed-a261-0242ac120002",
"accountNumber" : 458554,
"account_name": "Ramu"
},
{
"account_id" : "e4b0efdc-26b5-11ed-a261-0242ac120002",
"accountNumber" : 741852,
"account_name": "Rajesh"
}
],
"PhoneNumbers" :[
{
"phcountry" : "XXXX",
"phno" : "XXXX"
}
],
"Accounts" :[
{
"account_id" : "09ebdd0c-26b6-11ed-a261-0242ac120002",
"accountNumber" : 852456,
"account_name": "Vamsi"
},
{
"account_id" : "c93c9cc0-26b4-11ed-a261-0242ac120002",
"account_Number" : 458554,
"account_name": "Ramu"
},
{
"account_id" : "2ad45a96-b907-4e2e-ae90-4f429c3fc0e4",
"accountNumber" : 741852,
"account_name": "Savitri"
},
{
"account_id" :"f4974e1e-26b5-11ed-a261-0242ac120002",
"accountNumber" : 246598,
"account_name": "Indu"
}
]
}
Finally, I need the output of unique accounts along with the other data present in this JSON file. Thank you.
There are 7 unique accounts in total in the output payload.
Expected output:
{
"Accounts" : [
{
"account_id" :"c93c9cc0-26b4-11ed-a261-0242ac120002",
"account_Number" : 458554,
"account_name": "Ramu"
},
{
"account_id" : "e4b0efdc-26b5-11ed-a261-0242ac120002",
"accountNumber" : 741852,
"account_name": "Rajesh"
},
{
"account_id" : "027aa0f8-26b6-11ed-a261-0242ac120002",
"accountNumber" : 963258,
"account_name": "Harsha"
},
{
"account_id" : "09ebdd0c-26b6-11ed-a261-0242ac120002",
"accountNumber" : 852456,
"account_name": "Vamsi"
}
],
"address" : [
{
"street" : "XXXX",
"state" : "XXXX",
"country": "XXXX"
}
],
"Accounts" : [
{
"account_id" : "f4974e1e-26b5-11ed-a261-0242ac120002",
"accountNumber" : 246598,
"account_name": "Indu"
},
{
"account_id" :"2fa15b30-26b6-11ed-a261-0242ac120002",
"accountNumber" : 789789,
"account_name": "Suresh"
}
],
"PhoneNumbers" :[
{
"phcountry" : "XXXX",
"phno" : "XXXX"
}
],
"Accounts" :[
{
"account_id" : "2ad45a96-b907-4e2e-ae90-4f429c3fc0e4",
"accountNumber" : 741852,
"account_name": "Savitri"
}
]
}

I will assume that the logic for removing duplicates is to remove any element from an Accounts key whose account_id is already present in an element of a previous Accounts key in the same JSON document. I created a script that looks for each Accounts key, finds the unique account_ids in previous elements, and uses them to filter the current array. It does not remove duplicates inside the same array; however, that should be easy to add.
%dw 2.0
output application/json
fun unique_accounts_until_index(o, i) =
    flatten((o filterObject ((value, key, index) -> key as String == "Accounts" and index < i)).*Accounts).account_id distinctBy $
---
payload mapObject ((value, key, index) ->
    (key): if (key as String == "Accounts")
        value filter (!(unique_accounts_until_index(payload, index) contains log("id", $.account_id)))
    else value
)
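For readers who want to sanity-check the logic outside Mule, here is a minimal plain-Python sketch of the same idea (illustrative only, not DataWeave). It keeps the top-level keys as an ordered list of (key, value) pairs, because parsing this JSON into a plain dict would collapse the repeated "Accounts" keys; the account_id values are shortened for readability.

```python
# Plain-Python sketch of the DataWeave logic above (illustrative, not DataWeave).
# Top-level keys are kept as ordered (key, value) pairs to preserve the
# repeated "Accounts" keys that a plain dict would collapse.

def dedupe_accounts(pairs):
    seen_ids = set()
    result = []
    for key, value in pairs:
        if key == "Accounts":
            # Drop accounts whose id appeared in an *earlier* Accounts array;
            # like the DataWeave script, this keeps duplicates within one array.
            kept = [a for a in value if a["account_id"] not in seen_ids]
            seen_ids.update(a["account_id"] for a in value)
            result.append((key, kept))
        else:
            result.append((key, value))
    return result

pairs = [
    ("Accounts", [{"account_id": "c93c9cc0"}, {"account_id": "e4b0efdc"}]),
    ("address", [{"street": "XXXX"}]),
    ("Accounts", [{"account_id": "f4974e1e"}, {"account_id": "c93c9cc0"}]),
]
deduped = dedupe_accounts(pairs)
```

The second "Accounts" array keeps only "f4974e1e", since "c93c9cc0" already appeared in the first array.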
Output for the input in the question:
{
"Accounts": [
{
"account_id": "c93c9cc0-26b4-11ed-a261-0242ac120002",
"account_Number": 458554,
"account_name": "Ramu"
},
{
"account_id": "e4b0efdc-26b5-11ed-a261-0242ac120002",
"accountNumber": 741852,
"account_name": "Rajesh"
},
{
"account_id": "027aa0f8-26b6-11ed-a261-0242ac120002",
"accountNumber": 963258,
"account_name": "Harsha"
},
{
"account_id": "09ebdd0c-26b6-11ed-a261-0242ac120002",
"accountNumber": 852456,
"account_name": "Vamsi"
}
],
"address": [
{
"street": "XXXX",
"state": "XXXX",
"country": "XXXX"
}
],
"Accounts": [
{
"account_id": "f4974e1e-26b5-11ed-a261-0242ac120002",
"accountNumber": 246598,
"account_name": "Indu"
},
{
"account_id": "2fa15b30-26b6-11ed-a261-0242ac120002",
"accountNumber": 789789,
"account_name": "Suresh"
}
],
"PhoneNumbers": [
{
"phcountry": "XXXX",
"phno": "XXXX"
}
],
"Accounts": [
{
"account_id": "2ad45a96-b907-4e2e-ae90-4f429c3fc0e4",
"accountNumber": 741852,
"account_name": "Savitri"
}
]
}

Related

How to query an embedded array of objects based on conditions in MongoDB

I have an array of objects embedded in a document, and there are multiple such documents in a collection.
How do I query those embedded arrays of objects with the conditions below (based on the documents shown)?
First, get the objects whose "status" is "active" ("status" is present in only a few of the objects, not all of them).
Then take the "parent_user_id" of each object that satisfied the first condition, match it against the "parent_user_id" of the rest of the objects, and get those objects too.
The result of the above conditions has to replace the original array of objects (i.e. "users") in the output, instead of all the objects present.
So if you look at the result I am expecting, there are 3 elements missing from the users array because those elements did not satisfy the above conditions.
Document I have in collection(there will be multiple document as such)
{
"_id" : ObjectId("63a8808652f40e1d48a3d1d7"),
"name" : "A",
"description" : null,
"users" : [
{
"id" : "63a8808c52f40e1d48a3d1da",
"owner" : "John Doe",
"purchase_date" : "2022-12-25",
"status" : "active",
"parent_user_id" : "63a8808c52f40e1d48a3d1da",
"recent_items": ["tomato", "onion"]
},
{
"id" : "63a880a552f40e1d48a3d1dc",
"owner" : "John Doe 1",
"purchase_date" : "2022-12-25",
"parent_user_id" : "63a8808c52f40e1d48a3d1da",
"recent_items": ["onion"]
},
{
"id" : "63a880f752f40e1d48assddd",
"owner" : "John Doe 2",
"purchase_date" : "2022-12-25",
"parent_user_id" : "63a8808c52f40e1d48a3d1da"
},
{
"id" : "63a880f752f40e1d48a3d207",
"owner" : "John Doe 11",
"dt" : "2022-12-25",
"status" : "inactive",
"parent_user_id" : "63a880f752f40e1d48a3d207"
},
{
"id" : "63a880f752f40e1d48agfmmb",
"owner" : "John Doe 112",
"dt" : "2022-12-25",
"status" : "active",
"parent_user_id" : "63a880f752f40e1d48agfmmb",
"recent_items": ["tomato"]
},
{
"id" : "63a880f752f40e1d48agggg",
"owner" : "John SS",
"dt" : "2022-12-25",
"status" : "inactive",
"parent_user_id" : "63a880f752f40e1d48agggg"
},
{
"id" : "63a880f752f40e1d487777",
"owner" : "John SS",
"dt" : "2022-12-25",
"parent_user_id" : "63a880f752f40e1d48agggg"
}
]
}
The result I am expecting:
{
"_id" : ObjectId("63a8808652f40e1d48a3d1d7"),
"name" : "A",
"description" : null,
"users" : [
{
"id" : "63a8808c52f40e1d48a3d1da",
"owner" : "John Doe",
"purchase_date" : "2022-12-25",
"status" : "active",
"parent_user_id" : "63a8808c52f40e1d48a3d1da",
"recent_items": ["tomato", "onion"]
},
{
"id" : "63a880a552f40e1d48a3d1dc",
"owner" : "John Doe 1",
"purchase_date" : "2022-12-25",
"parent_user_id" : "63a8808c52f40e1d48a3d1da"
},
{
"id" : "63a880f752f40e1d48assddd",
"owner" : "John Doe 2",
"purchase_date" : "2022-12-25",
"parent_user_id" : "63a8808c52f40e1d48a3d1da"
},
{
"id" : "63a880f752f40e1d48agfmmb",
"owner" : "John Doe 112",
"dt" : "2022-12-25",
"status" : "active",
"parent_user_id" : "63a880f752f40e1d48agfmmb",
"recent_items": ["tomato"]
}
]
}
I would use some $filter stages as follows:
db.collection.aggregate([
{
$addFields: {
users_matched: {
$filter: {
input: "$users",
as: "user",
cond: {
$eq: [
"active",
"$$user.status"
],
},
},
},
},
},
{
$set: {
users: {
$filter: {
input: "$users",
as: "user",
cond: {
$in: [
"$$user.parent_user_id",
"$users_matched.id"
],
},
},
},
},
},
{
$unset: "users_matched"
}
])
You can check for yourself on mongoplayground https://mongoplayground.net/p/SrpsWb4v21x
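To make the two-pass idea concrete, here is what the pipeline above does per document, sketched in plain Python (illustrative, not MongoDB): first collect the ids of active users, then keep only the users whose parent_user_id points at one of those ids.

```python
# Plain-Python model of the $addFields/$set/$unset pipeline above
# (illustrative only; the real work happens server-side in MongoDB).

def filter_users(doc):
    # Stage 1 ($addFields): users whose status is "active".
    users_matched = [u for u in doc["users"] if u.get("status") == "active"]
    matched_ids = {u["id"] for u in users_matched}
    # Stage 2 ($set): keep users whose parent_user_id is among those ids.
    out = dict(doc)
    out["users"] = [u for u in doc["users"] if u.get("parent_user_id") in matched_ids]
    return out  # Stage 3 ($unset) would drop the temporary field.

doc = {
    "users": [
        {"id": "a", "status": "active", "parent_user_id": "a"},
        {"id": "b", "parent_user_id": "a"},
        {"id": "c", "status": "inactive", "parent_user_id": "c"},
        {"id": "d", "parent_user_id": "c"},
    ]
}
filtered = filter_users(doc)
```

Here users "a" and "b" survive (their parent is the active user "a"), while "c" and "d" are dropped.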
EDIT TO ANSWER THE SECOND QUESTION:
You could fix your tomato problem as follows:
db.collection.aggregate([
{
$addFields: {
active_users: {
$filter: {
input: "$users",
as: "user",
cond: {
$eq: [
"active",
"$$user.status"
],
},
},
},
tomato_users: {
$filter: {
input: "$users",
as: "user",
cond: {
$in: [
"tomato",
{
"$ifNull": [
"$$user.recent_items",
[]
]
}
],
},
},
}
},
},
{
$set: {
users: {
$filter: {
input: "$users",
as: "user",
cond: {
$and: [
{
$in: [
"$$user.parent_user_id",
"$active_users.id"
],
},
{
$in: [
"$$user.parent_user_id",
"$tomato_users.parent_user_id"
],
}
]
},
},
},
},
},
{
$unset: [
"active_users",
"tomato_users"
]
}
])
See on mongoplayground https://mongoplayground.net/p/mb21UT475yt
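The second pipeline just ANDs two membership tests together; a plain-Python sketch of the same logic (illustrative, not MongoDB) makes the two sets explicit. Note the `recent_items` lookup defaults to an empty list, mirroring the $ifNull guard:

```python
# Plain-Python model of the second pipeline: keep users whose parent is
# active AND whose parent appears among users that have "tomato" in
# recent_items (missing recent_items treated as [], like $ifNull).

def filter_users_tomato(doc):
    active_ids = {u["id"] for u in doc["users"] if u.get("status") == "active"}
    tomato_parents = {
        u["parent_user_id"]
        for u in doc["users"]
        if "tomato" in u.get("recent_items", [])
    }
    out = dict(doc)
    out["users"] = [
        u for u in doc["users"]
        if u.get("parent_user_id") in active_ids
        and u.get("parent_user_id") in tomato_parents
    ]
    return out

doc2 = {
    "users": [
        {"id": "a", "status": "active", "parent_user_id": "a", "recent_items": ["tomato"]},
        {"id": "b", "parent_user_id": "a"},
        {"id": "c", "status": "active", "parent_user_id": "c", "recent_items": ["onion"]},
        {"id": "d", "parent_user_id": "c"},
    ]
}
filtered2 = filter_users_tomato(doc2)
```

Users "c" and "d" are dropped even though "c" is active, because nobody in their group has "tomato" in recent_items.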

MongoDB doesn't group properly

I am using MongoDB with .NET, and for over a year the same query worked properly.
For a single date range, 04.06.22 - 05.06.22, the query malfunctioned and failed to group one product by ID, so I am getting 2 products with the same ID.
Query (I will delete unnecessary params to make it easier to understand):
.FilterByCustomer(filter.CustomerId)
.FilterByTerminals(filter.Terminals)
.FilterByDateRange("FromLocalDateTime", "ToLocalDateTime", filter.LocalDateRange)
.Unwind("SalesForSingleProductReports", false)
.Unwind("SalesForSingleProductReports.RevenueAndSalesPerCurrency", false)
.Group(
#" _id : {
ProductInfo: '$SalesForSingleProductReports.ProductInfo',
CurrencyInfo: '$SalesForSingleProductReports.RevenueAndSalesPerCurrency.CurrencyInfo'
},
RevenueAndSalesPerCurrency: {
$push: {
Revenue: {$sum: '$SalesForSingleProductReports.RevenueAndSalesPerCurrency.Revenue'},
}
}
}")
.Group(#"_id : '$_id.ProductInfo',
RevenueAndSalesPerCurrency: {$push: {
CurrencyInfo: '$_id.CurrencyInfo',
Revenue: {$sum: '$RevenueAndSalesPerCurrency.Revenue'},
}")
.Project(#"_id:0,
ProductInfo:'$_id',
'RevenueAndSalesPerCurrency': 1")
.Compile();
The result I am getting is multiple products, but one is duplicated: it has different revenue values but the same ID:
[{
"productInfo": {
"productId": "id-111",
"productName": null,
"productInternalCode": "internal-111",
"productCategoryId": "prod-cat-111",
"productCode": null,
"brand": null,
"countryCode": null
},
"revenueAndSalesPerCurrency": [
{
"currencyInfo": {
"currencyId": "curr-Id",
"code": "EUR",
"symbol": "€"
},
"revenue": 1680,
}
]
},
{
"productInfo": {
"productId": "id-111",
"productName": null,
"productInternalCode": "internal-111",
"productCategoryId": "prod-cat-111",
"productCode": null,
"brand": null,
"countryCode": null
},
"revenueAndSalesPerCurrency": [
{
"currencyInfo": {
"currencyId": "curr-id",
"code": "EUR",
"symbol": "€"
},
"revenue": 3080,
}
]
}
]
Here's one record from the collection on which these actions are performed:
{
"_id" : ObjectId("id-string"),
"CreatedAt" : ISODate("2021-11-10T10:14:10.116Z"),
"UpdatedAt" : ISODate("2021-11-10T10:14:10.116Z"),
"CreatedByUserId" : null,
"UpdatedByUserId" : null,
"Version" : null,
"IsDeleted" : false,
"ReportType" : "AggregatedSingleProductSalesReportForTerminalAndDate",
"ReportId" : "report-id",
"CustomerId" : "cust-id",
"FromUtcDateTime" : ISODate("2021-11-01T23:00:00.000Z"),
"ToUtcDateTime" : ISODate("2021-11-02T22:59:59.999Z"),
"FromLocalDateTime" : ISODate("2021-11-02T00:00:00.000Z"),
"ToLocalDateTime" : ISODate("2021-11-02T23:59:59.999Z"),
"TransactionIds" : [
"trans-id-1",
"trans-id-2"
],
"TerminalId" : "terminal-id",
"LocalDate" : "2021-11-02",
"SalesForSingleProductReports" : [
{
"ProductInfo" : {
"ProductId" : "prod-id",
"ProductName" : null,
"ProductInternalCode" : "internal-string",
"ProductCategoryId" : "prod-category-id"
},
"RevenueAndSalesPerCurrency" : [
{
"CurrencyInfo" : {
"CurrencyId" : "euro-curr",
"Code" : "GBP",
"Symbol" : "£"
},
"Revenue" : 180,
}
]
},
{
"ProductInfo" : {
"ProductId" : "prod-id-2",
"ProductName" : null,
"ProductInternalCode" : "internal-string-2",
"ProductCategoryId" : "prod-category-id-2"
},
"RevenueAndSalesPerCurrency" : [
{
"CurrencyInfo" : {
"CurrencyId" : "euro-curr",
"Code" : "GBP",
"Symbol" : "£"
},
"Revenue" : 90,
}
]
}
]
}

Compare two values in a document and display the common values in each row of the result

I have documents in which I am storing two main objects, scan and result. I want the output such that I get a separate row in the result for each match of scan.location and result.location.
Data:
{
"_id" : ObjectId("6024d02a1bf5152b44baf521"),
"scan" : {
"customerId" : "e2565eac-2086-48fc-ba18-dbce74602e22",
"customerGenderSelection" : "F",
"scanQuery" : {
"service" : "MA",
"date" : "08-Nov-2020",
"locations" : [
{
"locationId" : "0023",
"locationName" : "dobson"
},
{
"locationId" : "0001",
"locationName" : "shea"
}
]
},
"sessionId" : "70511849-97f8-4318-a96e-bb3a573c52ea",
},
"result" : [
{
"employeeId" : "224f55b2-27bc-4e34-8249-a8d201540ac2",
"time" : "11:00",
"locationId" : "0023"
},
{
"employeeId" : "224f55b2-27bc-4e34-8249-a8d201540ac2",
"time" : "12:00",
"locationId" : "0023"
},
{
"employeeId" : "224f55b2-27bc-4e34-8249-a8d201540ac2",
"time" : "12:30",
"locationId" : "0023"
},
{
"employeeId" : "224f55b2-27bc-4e34-8249-a8d201540ac2",
"time" : "13:00",
"locationId" : "0001"
}
]
}
I want the result where scan.scanQuery.locations.locationId == result.locationId, so I am expecting a result in the below format:
Expected Response:
{
"scan" : {
"customerId" : "e2565eac-2086-48fc-ba18-dbce74602e22",
"customerGenderSelection" : "F",
"scanQuery" : {
"service" : "MA",
"date" : "08-Nov-2020",
"locations" : [
{
"locationId" : "0023",
"locationName" : "dobson"
}
]
},
"sessionId" : "70511849-97f8-4318-a96e-bb3a573c52ea",
},
"result" : [
{
"employeeId" : "224f55b2-27bc-4e34-8249-a8d201540ac2",
"time" : "11:00",
"locationId" : "0023"
},
{
"employeeId" : "224f55b2-27bc-4e34-8249-a8d201540ac2",
"time" : "12:00",
"locationId" : "0023"
},
{
"employeeId" : "224f55b2-27bc-4e34-8249-a8d201540ac2",
"time" : "12:30",
"locationId" : "0023"
}
]
},
{
"scan" : {
"customerId" : "e2565eac-2086-48fc-ba18-dbce74602e22",
"customerGenderSelection" : "F",
"scanQuery" : {
"service" : "MA",
"date" : "08-Nov-2020",
"locations" : [
{
"locationId" : "0001",
"locationName" : "shea"
}
]
},
"sessionId" : "70511849-97f8-4318-a96e-bb3a573c52ea",
},
"result" : [
{
"employeeId" : "224f55b2-27bc-4e34-8249-a8d201540ac2",
"time" : "13:00",
"locationId" : "0001"
}
]
}
Any suggestions or ideas would be helpful.
Try this:
db.locations.aggregate([
{
$unwind: "$scan.scanQuery.locations"
},
{
$addFields: {
result: {
$filter: {
input: "$result",
as: "item",
cond: {
$eq: ["$scan.scanQuery.locations.locationId", "$$item.locationId"]
}
}
}
}
}
]);
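A plain-Python sketch of what this $unwind + $filter pipeline produces per document (illustrative, not MongoDB): one output document per location, with result narrowed to the entries sharing that location's locationId.

```python
# Plain-Python model of the $unwind + $filter pipeline above (illustrative).
# Field names are trimmed to the ones the pipeline actually touches.

def split_by_location(doc):
    out = []
    for loc in doc["scan"]["scanQuery"]["locations"]:
        out.append({
            # $unwind: one document per location, locations becomes an object.
            "scan": {**doc["scan"],
                     "scanQuery": {**doc["scan"]["scanQuery"], "locations": loc}},
            # $filter: keep only result entries matching this locationId.
            "result": [r for r in doc["result"]
                       if r["locationId"] == loc["locationId"]],
        })
    return out

doc3 = {
    "scan": {"scanQuery": {"locations": [{"locationId": "0023"},
                                         {"locationId": "0001"}]}},
    "result": [
        {"time": "11:00", "locationId": "0023"},
        {"time": "13:00", "locationId": "0001"},
    ],
}
docs = split_by_location(doc3)
```

Note that, like the aggregation output below, each row's "locations" ends up as a single object rather than the one-element array shown in the expected response; wrapping it back into an array would need an extra projection step.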
Output:
{
"_id": ObjectId("6024d02a1bf5152b44baf521"),
"scan": {
"customerId": "e2565eac-2086-48fc-ba18-dbce74602e22",
"customerGenderSelection": "F",
"scanQuery": {
"service": "MA",
"date": "08-Nov-2020",
"locations": {
"locationId": "0023",
"locationName": "dobson"
}
},
"sessionId": "70511849-97f8-4318-a96e-bb3a573c52ea"
},
"result": [
{
"employeeId": "224f55b2-27bc-4e34-8249-a8d201540ac2",
"time": "11:00",
"locationId": "0023"
},
{
"employeeId": "224f55b2-27bc-4e34-8249-a8d201540ac2",
"time": "12:00",
"locationId": "0023"
},
{
"employeeId": "224f55b2-27bc-4e34-8249-a8d201540ac2",
"time": "12:30",
"locationId": "0023"
}
]
},
{
"_id": ObjectId("6024d02a1bf5152b44baf521"),
"scan": {
"customerId": "e2565eac-2086-48fc-ba18-dbce74602e22",
"customerGenderSelection": "F",
"scanQuery": {
"service": "MA",
"date": "08-Nov-2020",
"locations": {
"locationId": "0001",
"locationName": "shea"
}
},
"sessionId": "70511849-97f8-4318-a96e-bb3a573c52ea"
},
"result": [
{
"employeeId": "224f55b2-27bc-4e34-8249-a8d201540ac2",
"time": "13:00",
"locationId": "0001"
}
]
}

Merge documents with their nested arrays and those arrays' nested arrays

I'm trying to create a query with the aggregation framework, but I could not get the result I want.
I have a collection of resellers; each reseller has a list of clients, and each client has a list of members. The structure is as below:
[
{
"userID" : "xxx",
"userType" : "RESELLER",
"clients" : [
{
"userID" : "xxx",
"userType" : "CLIENT",
"members" : [
{
"userID" : "xxx",
"userType" : "MEMBER"
},
{
"userID" : "xxx",
"userType" : "MEMBER"
}
]
},
{
"userID" : "xxx",
"userType" : "CLIENT",
"members" : [
{
"userID" : "xxx",
"userType" : "MEMBER"
},
{
"userID" : "xxx",
"userType" : "MEMBER"
}
]
}
]
},
{
"userID" : "xxx",
"userType" : "RESELLER",
"clients" : [
{
"userID" : "xxx",
"userType" : "CLIENT",
"members" : [
{
"userID" : "xxx",
"userType" : "MEMBER"
},
{
"userID" : "xxx",
"userType" : "MEMBER"
}
]
},
{
"userID" : "xxx",
"userType" : "CLIENT",
"members" : [
{
"userID" : "xxx",
"userType" : "MEMBER"
},
{
"userID" : "xxx",
"userType" : "MEMBER"
}
]
}
]
}
]
The result I want to get is :
[
{
"userID" : "xxx",
"userType" : "RESELLER"
},
{
"userID" : "xxx",
"userType" : "RESELLER"
},
{
"userID" : "xxx",
"userType" : "CLIENT"
},
{
"userID" : "xxx",
"userType" : "CLIENT"
},
{
"userID" : "xxx",
"userType" : "CLIENT"
},
{
"userID" : "xxx",
"userType" : "CLIENT"
},
{
"userID" : "xxx",
"userType" : "MEMBER"
},
{
"userID" : "xxx",
"userType" : "MEMBER"
},
{
"userID" : "xxx",
"userType" : "MEMBER"
},
{
"userID" : "xxx",
"userType" : "MEMBER"
},
{
"userID" : "xxx",
"userType" : "MEMBER"
}
]
I tried many things but did not get this result.
The closest solution I found is:
db.resellers.aggregate([
{
$unwind: "$clients"
},
{
$project: {
_id : 0,
members : "$clients.members"
}
},
{
$unwind: "$members"
},
{
$project: {
_id : 0,
userID : "$members.userID",
userType : "$members.userType"
}
}
]).pretty()
This solution returns only the list of members. What do I have to do to get a list containing all the resellers, the clients, and the members together?
You can use $reduce with $concatArrays to flatten your data structure and then run $unwind with $replaceRoot to get single member per document:
db.collection.aggregate([
{ "$project": {
"members": {
"$concatArrays": [
[{ "userID": "$userID", "userType": "$userType" }],
{ "$reduce": {
"input": "$clients",
"initialValue": [],
"in": {
"$concatArrays": [
"$$value",
[{ "userID": "$$this.userID", "userType": "$$this.userType" }],
"$$this.members"
]
}
}}
]
}
}},
{ "$unwind": "$members" },
{ "$replaceRoot": { "newRoot": "$members" }}
])
Mongo Playground
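The $concatArrays/$reduce combination is just a flatten over three levels; a plain-Python sketch (illustrative, not MongoDB) shows the shape of the traversal. Note that, like the pipeline, it interleaves each reseller with its own clients and members, whereas the expected output in the question groups all resellers first; a final sort by userType would match that exactly.

```python
# Plain-Python model of the $concatArrays/$reduce flattening above
# (illustrative): emit the reseller, then each of its clients, then that
# client's members, all as flat {userID, userType} records.

def flatten_users(resellers):
    out = []
    for r in resellers:
        out.append({"userID": r["userID"], "userType": r["userType"]})
        for c in r["clients"]:
            out.append({"userID": c["userID"], "userType": c["userType"]})
            out.extend(c["members"])
    return out

resellers = [{
    "userID": "r1", "userType": "RESELLER",
    "clients": [{
        "userID": "c1", "userType": "CLIENT",
        "members": [{"userID": "m1", "userType": "MEMBER"},
                    {"userID": "m2", "userType": "MEMBER"}],
    }],
}]
flat = flatten_users(resellers)
```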
Well, you can do this in the $project stage.
[
{
"$project": {
"members": {
"$reduce": {
"input": {
"$map": {
"input": "$clients",
"in": {
"$concatArrays": [
[
{
"userID": "$userID",
"userType": "$userType"
},
{
"userID": "$$this.userID",
"userType": "$$this.userType"
}
],
"$$this.members"
]
}
}
},
"initialValue": [
],
"in": {
"$concatArrays": [
"$$this",
"$$value"
]
}
}
}
}
}
]
Playground

Druid count differs when we run the same query on daily and raw data

When I run the query below against the ABS data source in Druid, I get a count, but it differs from the count the same query returns against the ABS_DAILY data source, even though we build ABS_DAILY from ABS.
{
"queryType" : "groupBy",
"dataSource" : "ABS",
"granularity" : "all",
"intervals" : [ "2018-07-12T00:00:00.000Z/2018-07-13T00:00:00.000Z" ],
"descending" : "false",
"aggregations" : [ {
"type" : "count",
"name" : "COUNT",
"fieldName" : "COUNT"
} ],
"postAggregations" : [ ],
"dimensions" : [ "event_id" ]
}
The JSON below is used to submit the daily job to Druid, which creates segments for ABS_DAILY for a specific time:
{
"spec": {
"ioConfig": {
"firehose": {
"dataSource": "ABS",
"interval": "2018-07-12T00:00:00.000Z/2018-07-13T00:00:00.000Z",
"metrics": null,
"dimensions": null,
"type": "ingestSegment"
},
"type": "index"
},
"dataSchema": {
"granularitySpec": {
"queryGranularity": "day",
"intervals": [
"2018-07-12T00:00:00.000Z/2018-07-13T00:00:00.000Z"
],
"segmentGranularity": "day",
"type": "uniform"
},
"dataSource": "ABS_DAILY",
"metricsSpec": [],
"parser": {
"parseSpec": {
"timestampSpec": {
"column": "server_timestamp",
"format": "dd MMMM, yyyy (HH:mm:ss)"
},
"dimensionsSpec": {
"dimensionExclusions": [
"server_timestamp"
],
"dimensions": []
},
"format": "json"
},
"type": "string"
}
}
},
"type": "index"
}
I queried ABS_DAILY with the query below, and it returned a different count than the ABS count, which it should not:
{
"queryType" : "groupBy",
"dataSource" : "ERS_DAILY",
"granularity" : "all",
"intervals" : [ "2018-07-12T00:00:00.000Z/2018-07-13T00:00:00.000Z" ],
"descending" : "false",
"aggregations" : [ {
"type" : "count",
"name" : "COUNT",
"fieldName" : "COUNT"
} ],
"postAggregations" : [ ],
"dimensions" : [ "event_id" ]
}
You are counting rows of daily aggregates.
To summarize pre-aggregated counts, you now need to sum the count column (see the aggregator type):
{
"queryType" : "groupBy",
"dataSource" : "ERS_DAILY",
"granularity" : "all",
"intervals" : [ "2018-07-12T00:00:00.000Z/2018-07-13T00:00:00.000Z" ],
"descending" : "false",
"aggregations" : [ {
"type" : "longSum",
"name" : "COUNT",
"fieldName" : "COUNT"
} ],
"postAggregations" : [ ],
"dimensions" : [ "event_id" ]
}