In iOS, I have an array of dictionaries:
dishs = (
{
amount = 1;
itemId = 576d315a7d24aa5085fe0dc3;
},
{
amount = 2;
itemId = 57666d75c8f2cb97bb07e50d;
},
{
amount = 1;
itemId = 57666d75c8f2cb97bb07e50c;
}
);
I send it with AFNetworking like this:
parameters = @{@"token": token, @"dishs": items};
[manager PUT:[url absoluteString] parameters:parameters success:^(NSURLSessionDataTask * _Nonnull task, id _Nullable responseObject) { /* ... */ } failure:nil];
But in Node.js I get:
[ { amount: [ '1', '2', '1' ],
itemId:
[ '576d315a7d24aa5085fe0dc3',
'57666d75c8f2cb97bb07e50d',
'57666d75c8f2cb97bb07e50c' ] } ]
If the array has only one item, then Node receives the JSON correctly.
iOS:
dishs = (
{
amount = 1;
itemId = 576d315a7d24aa5085fe0dc3;
}
);
Node.js:
[ { amount: '1', itemId: '576d315a7d24aa5085fe0dc3' } ]
If I use Postman to send the request, Node also receives the data in the correct format:
[ { itemId: '57666d75c8f2cb97bb07e50b', amount: 3 },
{ itemId: '57666d75c8f2cb97bb07e50a', amount: 5 } ]
I am confused: why does the JSON get parsed into this format in Node.js, and how can I fix it?
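What is likely happening (an assumption, since the serializer setup isn't shown): AFNetworking's default AFHTTPRequestSerializer encodes the parameters as a form-urlencoded body, flattening the array into repeated dishs[][...] keys, and Express's extended qs parser then regroups those repeated keys into the single merged object shown above. A minimal sketch of that parsing step:

// Sketch of how qs (the parser behind Express's extended urlencoded mode)
// regroups repeated dishs[][...] keys from a form-urlencoded body.
const qs = require('qs');

const body =
  'dishs[][amount]=1&dishs[][itemId]=576d315a7d24aa5085fe0dc3' +
  '&dishs[][amount]=2&dishs[][itemId]=57666d75c8f2cb97bb07e50d';

console.log(qs.parse(body).dishs);
// [ { amount: [ '1', '2' ],
//     itemId: [ '576d315a7d24aa5085fe0dc3', '57666d75c8f2cb97bb07e50d' ] } ]

With a single item there are no repeated keys to merge, which is why that case comes through correctly. Sending the body as real JSON instead, e.g. by setting manager.requestSerializer = [AFJSONRequestSerializer serializer]; on the iOS side and parsing JSON in Node, should preserve the array structure.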
So let's say I have a collection of transaction data that is shaped like so:
{
tokenAddress: string; // Address of token
to: string; // Address of wallet receiving token
from: string; // Address of wallet sending token
quantity: number; // Number of tokens sent
}
I'd like to perform an aggregation that transforms this data like so:
{
tokenAddress: string; // Address of token
walletAddress: string; // Each wallet has a row
quantity: number; // Number of tokens in wallet
}
I am doing this currently by pulling the flat transaction data out and performing a pretty complex reduce in the application code.
export const getAddressesTokensTransferred = async (
walletAddresses: string[]
) => {
const collection = await getCollection('tokenTransfers');
const result = await collection
.find({
$or: [
{ from: { $in: walletAddresses } },
{ to: { $in: walletAddresses } },
],
})
.toArray();
return result.reduce((acc, { tokenAddress, quantity, to, from }) => {
const useTo = walletAddresses.includes(to);
const useFrom = walletAddresses.includes(from);
let existingFound = false;
for (const existing of acc) {
if (existing.tokenAddress === tokenAddress) {
if (useTo && existing.walletAddress === to) {
existingFound = true;
existing.quantity += quantity;
break;
} else if (useFrom && existing.walletAddress === from) {
existingFound = true;
existing.quantity -= quantity;
break;
}
}
}
if (!existingFound) {
if (useTo) {
acc.push({ tokenAddress, walletAddress: to, quantity });
}
if (useFrom) {
acc.push({
tokenAddress,
walletAddress: from,
quantity: quantity * -1,
});
}
}
return acc;
}, [] as { tokenAddress: string; walletAddress: string; quantity: number }[]);
};
I feel like there MUST be a better way to do this within MongoDB, but I'm just not experienced enough with it to know how. Any help is greatly appreciated!
Edit - Adding some sample documents:
Input walletAddresses:
[
'0x72caf7c477ccab3f95913b9d8cdf35a1caf25555',
'0x5b6e57baeb62c530cf369853e15ed25d0c82a866'
]
Result from initial find:
[
{
to: "0x123457baeb62c530cf369853e15ed25d0c82a866",
from: "0x4321f7c477ccab3f95913b9d8cdf35a1caf25555",
quantity: 5,
tokenAddress: "0x12129ec85eebe10a9b01af64e89f9d76d22cea18",
},
{
to: "0x123457baeb62c530cf369853e15ed25d0c82a866",
from: "0x0000000000000000000000000000000000000000",
quantity: 5,
tokenAddress: "0x12129ec85eebe10a9b01af64e89f9d76d22cea18"
},
{
to: "0x4321f7c477ccab3f95913b9d8cdf35a1caf25555",
from: "0x0000000000000000000000000000000000000000",
quantity: 5,
tokenAddress: "0x12129ec85eebe10a9b01af64e89f9d76d22cea18"
},
{
to: "0x4321f7c477ccab3f95913b9d8cdf35a1caf25555",
from: "0x0000000000000000000000000000000000000000",
quantity: 5,
tokenAddress: "0x12129ec85eebe10a9b01af64e89f9d76d22cea18"
}
]
This is a small sample with just two wallets (the 0x000... address and any others not in the walletAddresses array can essentially be discarded) and a single token (there would be many; we would want a row for each token that has a transaction with these wallets).
The desired result would be:
[
{
tokenAddress: '0x86ba9ec85eebe10a9b01af64e89f9d76d22cea18',
walletAddress: '0x72caf7c477ccab3f95913b9d8cdf35a1caf25555',
quantity: 5
},
{
tokenAddress: '0x86ba9ec85eebe10a9b01af64e89f9d76d22cea18',
walletAddress: '0x5b6e57baeb62c530cf369853e15ed25d0c82a866',
quantity: 10
}
]
One option is to "duplicate" the transactions and keep them temporarily per walletAddress. This way we can group them by walletAddress:
db.collection.aggregate([
{
$project: {
data: [
{walletAddress: "$from",
quantity: {$multiply: ["$quantity", -1]},
tokenAddress: "$tokenAddress"},
{walletAddress: "$to",
quantity: "$quantity",
tokenAddress: "$tokenAddress"}
]
}
},
{$unwind: "$data"},
{$group: {
_id: "$data.walletAddress",
quantity: {$sum: "$data.quantity"},
tokenAddress: {$first: "$data.tokenAddress"}
}},
{$match: {
_id: {$in: [
"0x4321f7c477ccab3f95913b9d8cdf35a1caf25555",
"0x123457baeb62c530cf369853e15ed25d0c82a866"
]
}
}}
])
See how it works on the playground example.
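Since the real data involves many tokens and the desired output has one row per wallet-and-token pair, the group key presumably needs to include the token as well. A sketch of the adjusted last two stages, reusing the field names from the pipeline above:

{$group: {
  _id: {
    walletAddress: "$data.walletAddress",
    tokenAddress: "$data.tokenAddress"
  },
  quantity: {$sum: "$data.quantity"}
}},
{$match: {
  "_id.walletAddress": {$in: [
    "0x4321f7c477ccab3f95913b9d8cdf35a1caf25555",
    "0x123457baeb62c530cf369853e15ed25d0c82a866"
  ]}
}}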
I am displaying messages with infinite scroll. In other words, I load them set by set and display them using this function:
const getXNumberOfMessages = async (
user_id,
conv_id,
page,
results_per_page
) => {
results_per_page = parseInt(results_per_page);
const conversation = await Conversation.findOne({
_id: conv_id,
members: { $in: [user_id] },
}).populate({
path: "messages",
options: {
skip: results_per_page * page,
limit: results_per_page,
},
});
let messages = conversation.messages.map((message) => {
message.text = encryptionServices.decryptPrivateMessage(message.text);
return message;
});
return messages;
};
The problem is that messages, as you know, are loaded from the last set back to the first set, whereas this function does the opposite: it loads the messages from the first set to the last set. Any idea how to achieve my goal?
So I managed to solve this like this:
const getXNumberOfMessages = async (
user_id,
conv_id,
page,
results_per_page
) => {
results_per_page = parseInt(results_per_page);
const conversation = await Conversation.findOne({
_id: conv_id,
members: { $in: [user_id] },
}).populate({
path: "messages",
options: {
skip: results_per_page * page,
limit: results_per_page,
sort: { date: -1 }, // newest first, so each page steps further back in time
},
});
let messages = conversation.messages.map((message) => {
message.text = encryptionServices.decryptPrivateMessage(message.text);
return message;
});
messages.reverse(); // restore chronological order within the page for display
return messages;
};
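A hypothetical usage sketch (the IDs and page size are made up) to illustrate the paging direction: page 0 is the newest set, and each earlier set is prepended as the user scrolls up:

const newest = await getXNumberOfMessages(userId, convId, 0, 20); // latest 20, oldest-first
const older = await getXNumberOfMessages(userId, convId, 1, 20); // the 20 before those
const thread = [...older, ...newest]; // still in chronological order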
I have a simplified order model that looks like:
order = {
_id: 1,
productGroups:[
{ productId: 1, qty: 3 },
{ productId: 2, qty: 5 }
],
cancels:[]
}
Now I have an API that cancels part of the order.
The request could be something like cancel {productId: 1, qty: 1}, {productId: 2, qty: 3} from the order where orderId: 1. The result should be:
order = {
_id: 1,
productGroups:[
{ productId: 1, qty: 2 },
{ productId: 2, qty: 2 }
],
cancels:[
{
productGroups:[
{ productId: 1, qty: 1 },
{ productId: 2, qty: 3 }
]
}
]
}
let order = await Order.findOneAndUpdate(
{
_id: id
},
{
$inc: {
'productGroups.$.qty': cancelQty //this is the part that needs fixing. how do I get different cancelQty according to productId
},
$push: {
cancels: {
productGroups: cancelProductGroups
}
}
},
{ new: true }
);
Now I know I can just findOne, update the model with JavaScript, and then .save() the model. But if possible I would like to do this update in one go. Or, if that is not possible, can I fix the schema so that I can do such an update in a single request?
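One possible single-request approach, sketched under the assumption of MongoDB 3.6+/Mongoose 5+ arrayFilters support, with cancelProductGroups taken to be the request's [{ productId, qty }] list (a hypothetical name carried over from the snippet above). Each cancelled product gets its own $inc entry and its own filter identifier:

const inc = {};
const arrayFilters = cancelProductGroups.map((p, i) => {
  inc[`productGroups.$[p${i}].qty`] = -p.qty; // decrement each matched group
  return { [`p${i}.productId`]: p.productId }; // one filter per product
});

let order = await Order.findOneAndUpdate(
  { _id: id },
  {
    $inc: inc,
    $push: { cancels: { productGroups: cancelProductGroups } },
  },
  { new: true, arrayFilters }
);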
I must iterate over an array, find the corresponding objects in the other arrays, and merge the results into one object.
Assume I have three arrays:
var users = [
{ name: "A", type: 2, level: 1 },
{ name: "B", type: 1, level: 2 }
]
var types = [
{ description: "Type 1", id: 1 },
{ description: "Type 2", id: 2 }
]
var levels = [
{ description: "Level 1", id: 1 },
{ description: "Level 2", id: 1 }
]
I want to have the following result:
var users = [
{ name: "A", type: 2, level: 1, levelDescription: "Level 1", typeDescription: "Type 2" },
{ name: "B", type: 1, level: 2, levelDescription: "Level 2", typeDescription: "Type 1" }
]
I know I can achieve it like this:
var usersObservable = Rx.Observable.fromArray(users);
var typesObservable = Rx.Observable.fromArray(types);
var levelsObservable = Rx.Observable.fromArray(levels);
var uiUsers = []; // not really needed because I will use the same users array again.
usersObservable.map(function(user) {
typesObservable.filter(function(type) {
return type.id == user.type;
}).subscribeOnNext(function(userType) {
user.typeDescription = userType.description;
});
return user;
}).map(function(user) {
levelsObservable.filter(function(level) {
return level.id == user.level;
}).subscribeOnNext(function(level) {
user.levelDescription = level.description;
});
return user;
})
.subscribeOnNext(function(user) {
uiUsers.push(user);
})
I would like to have a solution without nested Observables.
Thanks.
I am not sure why you are using Rx at all for this problem. You have data in space (i.e. arrays), not data over time (i.e. an observable sequence), yet you force these arrays into Rx and end up with a very complicated solution.
I think you are looking for something like the answer here https://stackoverflow.com/a/17500836/393615 where you would join the source arrays. In your case you just "inner-join" twice to combine all three data sets.
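A minimal sketch of that array-only "inner-join" approach, with no Rx involved:

const typeById = new Map(types.map(t => [t.id, t.description]));
const levelById = new Map(levels.map(l => [l.id, l.description]));

const uiUsers = users.map(user => Object.assign({}, user, {
  typeDescription: typeById.get(user.type),
  levelDescription: levelById.get(user.level),
}));
// [ { name: "A", type: 2, level: 1, typeDescription: "Type 2", levelDescription: "Level 1" }, ... ]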
You can achieve this by using the switchMap operator, which combines the result of a filtered stream with the latest value of the original stream and uses a projection function to merge the results into a single object. This can be generalised such that you can use a generic higher-order function in both cases. See the fiddle.
Full code (ES2015, RxJS5):
const users = [
{ name: "A", type: 2, level: 1 },
{ name: "B", type: 1, level: 2 }
];
const types = [
{ description: "Type 1", id: 1 },
{ description: "Type 2", id: 2 }
];
const levels = [
{ description: "Level 1", id: 1 },
{ description: "Level 2", id: 2 }
];
const users$ = Rx.Observable.from(users);
const types$ = Rx.Observable.from(types);
const levels$ = Rx.Observable.from(levels);
function join(s$, sourceProperty, targetProperty, streamProperty) {
return function(initObj) {
const stream$ = s$.filter(x => x.id === initObj[sourceProperty]);
return Rx.Observable.combineLatest(
Rx.Observable.of(initObj),
stream$,
(obj, streamObj) => {
const prop = streamObj[streamProperty];
return Object.assign({}, obj, { [targetProperty]: prop });
}
);
};
}
users$
.switchMap(join(types$, 'type', 'typeDescription', 'description'))
.switchMap(join(levels$, 'level', 'levelDescription', 'description'))
.subscribe(x => console.log(x));
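One caveat: switchMap unsubscribes from the previous inner observable whenever the next user arrives, which only works here because the inner streams emit synchronously. With asynchronous sources, mergeMap would be the safer drop-in:

users$
  .mergeMap(join(types$, 'type', 'typeDescription', 'description'))
  .mergeMap(join(levels$, 'level', 'levelDescription', 'description'))
  .subscribe(x => console.log(x));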
Say I have the following array of objects:
dataArray = [
{ id: "a", score: 1 },
{ id: "b", score: 2 },
{ id: "c", score: 5 },
...
{ id: "a", score: 3 },
...
{ id: "c", score: 2},
...
]
How can I obtain a resultArray like the following:
resultArray = [
{ id: "a", score: sum of all the scores when id is a },
{ id: "b", score: sum of all the scores when id is b },
...
...
]
If you use the underscore library:
_.map _.groupBy(dataArray, 'id'), (v, k) ->
  {id: k, score: _.reduce(v, ((m, i) -> m + i['score']), 0)}
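For reference, the same Underscore pipeline in plain JavaScript, as a direct translation of the CoffeeScript above:

var resultArray = _.map(_.groupBy(dataArray, 'id'), function (v, k) {
  return { id: k, score: _.reduce(v, function (m, i) { return m + i.score; }, 0) };
});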
The Underscore version is probably the most succinct. This is a plain CoffeeScript version that only creates one auxiliary object to have fast access by id and make the whole thing O(n):
aggregateScores = (dataArr) ->
  scores = {}
  for {id, score} in dataArr
    scores[id] = (scores[id] or 0) + score
  {id, score} for id, score of scores
console.log aggregateScores [
{ id: "a", score: 1 }
{ id: "b", score: 2 }
{ id: "c", score: 5 }
{ id: "a", score: 3 }
{ id: "c", score: 2 }
]
# Output:
# [{id:"a", score:4}, {id:"b", score:2}, {id:"c", score:7}]
This is just plain JavaScript, but here is the long answer to your question:
function aggregate(values, init, keyGetter, valueGetter, aggregator) {
var results = {}
for (var index = 0; index != values.length; ++index) {
var value = values[index]
var key = keyGetter(value)
var soFar;
if (key in results) {
soFar = results[key]
} else {
soFar = init
}
value = valueGetter(value)
results[key] = aggregator(soFar, value)
}
return results
}
var array = [
{ id: 'a', score: 1 },
{ id: 'b', score: 2 },
{ id: 'c', score: 5 },
{ id: 'a', score: 3 },
{ id: 'c', score: 2 }
]
function keyGetter(value) {
return value.id
}
function valueGetter(value) {
return value.score
}
function aggregator(sum, value) {
return sum + value
}
function ready() {
var results = aggregate(array, 0, keyGetter, valueGetter, aggregator)
console.info(results)
}
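Note that aggregate returns a keyed object ({ a: 4, b: 2, c: 7 }) rather than the array shape the question asks for; a small follow-up inside ready converts it:

var resultArray = Object.keys(results).map(function (id) {
  return { id: id, score: results[id] }
})
console.info(resultArray)
// [ { id: 'a', score: 4 }, { id: 'b', score: 2 }, { id: 'c', score: 7 } ]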
Here's a straightforward CoffeeScript version:
data = [
{ id: "a", score: 1 }
{ id: "b", score: 2 }
{ id: "a", score: 5 }
{ id: "c", score: 2 }
{ id: "b", score: 3 }
]
# Aggregate scores in a map.
resultSet = {}
for obj in data
  resultSet[obj.id] ?= 0
  resultSet[obj.id] += obj.score
console.log resultSet
# Create array from map.
resultArr = for key, val of resultSet
  { id: key, score: val }
console.log resultArr
The output is:
{ a: 6, b: 5, c: 2 }
[ { id: 'a', score: 6 },
{ id: 'b', score: 5 },
{ id: 'c', score: 2 } ]
I'm sure it's possible to create a fancier solution using the functions in Underscore, but the CoffeeScript solution isn't bad, so I went for something simple to understand.
It's a bit of overkill if this is the only aggregation you want to do, but there is a nicely documented aggregation library called Lumenize that does simple group-by operations like this, in addition to more advanced pivot tables, n-dimensional cubes, hierarchical roll-ups, and timezone-precise time-series aggregations.
Here is the jsFiddle for a Lumenize solution.
If you want to try it in Node.js:
npm install Lumenize --save
then put this into a file named lumenizeGroupBy.coffee:
lumenize = require('Lumenize')
dataArray = [
{ id: "a", score: 1 },
{ id: "b", score: 2 },
{ id: "c", score: 5 },
{ id: "a", score: 3 },
{ id: "c", score: 2}
]
dimensions = [{field:'id'}]
metrics = [{field: 'score', f: 'sum', as: 'sum'}]
config = {dimensions, metrics}
cube = new lumenize.OLAPCube(config, dataArray)
console.log(cube.toString(null, null, 'sum'))
and run
coffee lumenizeGroupBy.coffee