JSON to POJO not working when the same key is repeated in an array

I have the below JSON:
{
  "startDate": "2021-02-01",
  "endDate": "2021-02-14",
  "columns": [
    {
      "attribute": "PID"
    },
    {
      "attribute": "CID"
    },
    {
      "attribute": "SID"
    }
  ],
  "ids": [
    {
      "id": "123456A",
      "idType": "PID"
    }
  ]
}
As you can see, the columns array repeats the same key, 'attribute'.
I created a POJO for this, but I am not able to add the data correctly.
-----------POJO--------
public class Columns {
    private String attribute;

    public String getAttribute() {
        return attribute;
    }

    public void setAttribute(String attribute) {
        this.attribute = attribute;
    }
}
The other POJO:
public List<Columns> getColumns() {
    return columns;
}

public void setColumns(List<Columns> columns) {
    this.columns = columns;
}
I am adding data like this:
Columns c = new Columns();
c.setAttribute("PID");
List<Columns> l = new ArrayList<Columns>();
l.add(c);
c.setAttribute("CID");
l.add(c);
c.setAttribute("SID");
l.add(c);
m.setColumns(l);
and it's giving output like this (the value of m):
"columns": [
{
"attribute": "SID"
},
{
"attribute": "SID"
},
{
"attribute": "SID"
}
],
What am I doing wrong?

I think you should create a class Column (singular) so you can create a list like so:
List<Column> l = new ArrayList<Column>();
Column c = new Column();
c.setAttribute("PID");
l.add(c);
Column c2 = new Column();
c2.setAttribute("CID");
l.add(c2);
...
You likely want to use a loop for this; see the sketch below.
In your current setup, you keep mutating the same c object, which means that the last change you make to it shows up in every item of the list (they all point to c).
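For example (a sketch, assuming m is the wrapper object with the setColumns method shown above):
List<Column> columns = new ArrayList<Column>();
for (String attr : new String[] { "PID", "CID", "SID" }) {
    Column c = new Column(); // a fresh object per iteration, so no aliasing
    c.setAttribute(attr);
    columns.add(c);
}
m.setColumns(columns);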

Related

How can I delete or update particular object in an array of object while using mongoose and nestjs

I want to update or delete a particular field (subTags) in an array of objects.
{
  "_id": {
    "$oid": "63c175a0ec5dac10b35ac9da"
  },
  "name": "clerk",
  "description": "clerical duties including typing and filing.",
  "tags": "assistant receptionist typist ",
  "__v": 0,
  "subCategory": [
    {
      "name": "tele-clerk",
      "subTags": [
        "assistant",
        "receptionist"
      ]
    },
    {
      "name": "administrative-clerk",
      "subTags": [
        "assistant",
        "receptionist",
        "typist"
      ]
    }
  ]
}
I tried using:
const updatedSubCategory = await this.categories.findOneAndUpdate(
  { name: category.name, "subCategory.name": category.subCategory.name },
  { $addToSet: { subCategory: category.subCategory } },
  { new: true }
)
but it is creating another new object of the same name inside the subCategory array, and I don't want that to happen. The name field in the subcategory must be unique.
My schema is:
@Schema({ collection: 'category' })
export class SubCategorySchema {
  @Prop({ unique: true })
  name: string
  @Prop()
  tags: string[]
}

@Schema({ collection: 'category' })
export class CategorySchema {
  @Prop({ unique: true, lowercase: true })
  name: string
  @Prop({ lowercase: true })
  subCategory: SubCategorySchema[]
  @Prop({ lowercase: true })
  description: string
  @Prop({ lowercase: true })
  tags: string
}
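For what it's worth, updating one matching array element in place is usually done with $set and the positional $ operator rather than $addToSet, and removal with $pull. A sketch against the schema above (untested; it assumes category.subCategory carries the new subTags):
// update the subTags of the one subcategory whose name matches
const updated = await this.categories.findOneAndUpdate(
  { name: category.name, "subCategory.name": category.subCategory.name },
  { $set: { "subCategory.$.subTags": category.subCategory.subTags } },
  { new: true }
)

// delete a subcategory by name
const removed = await this.categories.findOneAndUpdate(
  { name: category.name },
  { $pull: { subCategory: { name: category.subCategory.name } } },
  { new: true }
)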

MongoDB - Get Names of All Keys Matching Criteria in a Collection

As the title says, I need to retrieve the names of all the keys in my MongoDB collection, BUT I need them split up based on a key/value pair that each document has. Here's my clunky analogy: If you imagine the original collection is a zoo, I need a new collection that contains all the keys Zebras have, all the keys Lions have, and all the keys Giraffes have. The different animal types share many of the same keys, but those keys are meant to be specific to each type of animal (because the user needs to be able to (for example) search for Zebras taller than 3ft and giraffes shorter than 10ft).
Here's a bit of example code that I ran which worked well - it grabbed all the unique keys in my entire collection and threw them into their own collection:
db.runCommand({
  "mapreduce": "MyZoo",
  "map": function() {
    for (var key in this) { emit(key, null); }
  },
  "reduce": function(key, stuff) { return null; },
  "out": "MyZoo" + "_keys"
})
I'd like a version of this command that would look through the MyZoo collection for animals with "type":"zebra", find all the unique keys, and place them in a new collection (MyZoo_keys) - then do the same thing for "type":"lion" & "type":"giraffe", giving each "type" its own array of keys.
Here's the collection I'm starting with:
{
  "name": "Zebra1",
  "height": "300",
  "weight": "900",
  "type": "zebra",
  "zebraSpecific1": "somevalue"
},
{
  "name": "Lion1",
  "height": "325",
  "weight": "1200",
  "type": "lion"
},
{
  "name": "Zebra2",
  "height": "500",
  "weight": "2100",
  "type": "zebra",
  "zebraSpecific2": "somevalue"
},
{
  "name": "Giraffe",
  "height": "4800",
  "weight": "2400",
  "type": "giraffe",
  "giraffeSpecific1": "somevalue",
  "giraffeSpecific2": "someothervalue"
}
And here's what I'd like the MyZoo_keys collection to look like:
{
  "zebra": [
    {
      "name": null,
      "height": null,
      "weight": null,
      "type": null,
      "zebraSpecific1": null,
      "zebraSpecific2": null
    }
  ],
  "lion": [
    {
      "name": null,
      "height": null,
      "weight": null,
      "type": null
    }
  ],
  "giraffe": [
    {
      "name": null,
      "height": null,
      "weight": null,
      "type": null,
      "giraffeSpecific1": null,
      "giraffeSpecific2": null
    }
  ]
}
That's probably imperfect JSON, but you get the idea...
Thanks!
You can modify your code to dump the results in a more readable and organized format.
The map function:
Emit the type of the animal as the key, and an array of field names for each animal (document). Leave out the _id field.
Code:
var map = function() {
  var keys = [];
  Object.keys(this).forEach(function(k) {
    if (k != "_id") {
      keys.push(k);
    }
  })
  emit(this.type, { "keys": keys });
}
The reduce function:
For each type of animal, consolidate and return the unique keys.
Use an object (uniqueKeys) to check for duplicates; this improves the running time even though it occupies some memory, since the lookup is O(1).
Code:
var reduce = function(key, values) {
  var uniqueKeys = {};
  var result = [];
  values.forEach(function(value) {
    value.keys.forEach(function(k) {
      if (!uniqueKeys[k]) {
        uniqueKeys[k] = 1;
        result.push(k);
      }
    })
  })
  return { "keys": result };
}
Invoking Map-Reduce:
db.collection.mapReduce(map, reduce, { out: "t1" });
Aggregating the result:
db.t1.aggregate([
  { $project: { "_id": 0, "animal": "$_id", "keys": "$value.keys" } }
])
Sample output:
{
  "animal" : "lion",
  "keys" : [
    "name",
    "height",
    "weight",
    "type"
  ]
}
{
  "animal" : "zebra",
  "keys" : [
    "name",
    "height",
    "weight",
    "type",
    "zebraSpecific1",
    "zebraSpecific2"
  ]
}
{
  "animal" : "giraffe",
  "keys" : [
    "name",
    "height",
    "weight",
    "type",
    "giraffeSpecific1",
    "giraffeSpecific2"
  ]
}
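As an aside, on MongoDB 3.4.4 or newer the same per-type key lists can be produced without map-reduce by unwinding $objectToArray in the aggregation pipeline (a sketch, untested):
db.MyZoo.aggregate([
  { $project: { type: 1, kv: { $objectToArray: "$$ROOT" } } },
  { $unwind: "$kv" },
  { $match: { "kv.k": { $ne: "_id" } } },
  { $group: { _id: "$type", keys: { $addToSet: "$kv.k" } } },
  { $out: "MyZoo_keys" }
])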

Sorting by document values in couchbase and scala

I am using Couchbase and I have a document (a product) that looks like:
{
  "id": "5fe281c3-81b6-4eb5-96a1-331ff3b37c2c",
  "defaultName": "default name",
  "defaultDescription": "default description",
  "references": {
    "configuratorId": "1",
    "seekId": "1",
    "hsId": "1",
    "fpId": "1"
  },
  "tenantProducts": {
    "2": {
      "adminRank": 1,
      "systemRank": 15,
      "categories": [
        "3"
      ]
    }
  },
  "docType": "product"
}
I wish to get all the products (this JSON is a product) that belong to a certain category, so I've created the following view:
function (doc, meta) {
  if (doc.docType == "product") {
    for (var tenant in doc.tenantProducts) {
      var categories = doc.tenantProducts[tenant].categories
      // emit(categories, doc);
      for (var i = 0; i < categories.length; i++) {
        emit([tenant, categories[i]], doc);
      }
    }
  }
}
So I can run the view with keys like:
[["tenantId", "Category1"]] //Can also have: [["tenant1", "Category1"],["tenant1", "Category2"] ]
My problem is that I receive the documents, but I wish to sort them by their adminRank and systemRank, two fields that exist in the "value".
I understand that the only solution would be to add those fields to my key, so that my key would become:
[["tenantId", "Category1", "systemRank", "adminRank"]]
And after I get the documents, do I need to sort by the 3rd and 4th parameters of the key?
I just want to make sure I understand this right.
Thanks
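With map/reduce views the emitted key is the only thing the index can order by, so yes, the ranks have to go into the key. A sketch of the modified view (you would then query a startkey/endkey range on [tenant, category] rather than exact keys):
function (doc, meta) {
  if (doc.docType == "product") {
    for (var tenant in doc.tenantProducts) {
      var tp = doc.tenantProducts[tenant];
      for (var i = 0; i < tp.categories.length; i++) {
        // the ranks go into the key so the index can sort on them
        emit([tenant, tp.categories[i], tp.systemRank, tp.adminRank], null);
      }
    }
  }
}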

mongodb conditional query with $gt (greater than) returns zero results

I have a mongodb database which has a users collection containing the following document
{
  "_id": ObjectId("5161446e03642eab4a818fcd"),
  "id": "35",
  "userInfo": {
    "name": "xyz",
    "rollNumber": 121
  }
}
I want to get all the rows whose id is greater than a specific value
@GET
@Path("/query")
@Produces({ MediaType.APPLICATION_JSON })
public List<String> getLactionInfo(@QueryParam("from") int from) {
    BasicDBObject query = new BasicDBObject();
    // match ids greater than the given value
    query.put("id", new BasicDBObject("$gt", from));
    // get the collection of users
    DBCursor cursor = getTable("users").find(query);
    List<String> listUsers = new ArrayList<String>();
    while (cursor.hasNext()) {
        DBObject object = cursor.next();
        String id = object.get("_id").toString();
        object.put("_id", id);
        String objectToString = object.toString();
        listUsers.add(objectToString);
    }
    return listUsers;
}
When I debugged my code, it showed that listUsers is empty. Also, when I manually run the following query in the console I get no results.
db.users.find({id:{$gt:60}})
The id in your sample data is stored as a string. So, the $gt check is trying to compare an integer to a string.
If you switch your id value to an integer, it should work as expected.
For example:
db.test.insert( { "id": 40, "name": "wired" } )
db.test.insert( { "id": 60, "name": "prairie" } )
db.test.insert( { "id": 70, "name": "stack" } )
db.test.insert( { "id": 80, "name": "overflow" } )
db.test.insert( { "id": "90", "name": "missing" } )
Then, the test:
> db.test.find({"id": { "$gt": 60 }}).pretty()
{
  "_id" : ObjectId("516abfdf8e7f7f35107081cc"),
  "id" : 70,
  "name" : "stack"
}
{
  "_id" : ObjectId("516abfe08e7f7f35107081cd"),
  "id" : 80,
  "name" : "overflow"
}
For a quick data fix-up you could do something like this from the shell (paste as one line and change myCollection to your collection name):
db.myCollection.find().forEach(function(doc) { doc.id = parseInt(doc.id, 10); db.myCollection.save(doc); })
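On MongoDB 4.2 or newer the same fix-up can be done server-side with a pipeline update and $toInt (a sketch; note that $toInt errors out on values that cannot be converted):
db.myCollection.updateMany(
  {},
  [ { $set: { id: { $toInt: "$id" } } } ] // pipeline-style update, 4.2+
)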

Merge changeset documents in a query

I have recorded changes from an information system in a MongoDB database. Every time a set of values is set or changed, a record is saved in the database.
The change collection is in the following form:
{ "user_id": 1, "timestamp": { "date" : "2010-09-22 09:28:02", "timezone_type" : 3, "timezone" : "Europe/Paris" } }, "changes: { "fieldA": "valueA", "fieldB": "valueB", "fieldC": "valueC" } }
{ "user_id": 1, "timestamp": { "date" : "2010-09-24 19:01:52", "timezone_type" : 3, "timezone" : "Europe/Paris" } }, "changes: { "fieldA": "new_valueA", "fieldB": null, "fieldD": "valueD" } }
{ "user_id": 1, "timestamp": { "date" : "2010-10-01 11:11:02", "timezone_type" : 3, "timezone" : "Europe/Paris" } }, "changes: { "fieldD": "new_valueD" } }
Of course there are thousands of records per user with different attributes, which represents millions of records in total. What I want to do is see a user's status at a given time. For example, user_id 1 at 2010-09-30 would be:
fieldA: new_valueA
fieldC: valueC
fieldD: valueD
This means I need to flatten all the changes prior to a given date for a given user into a single record. Can I do that directly in Mongo?
Edit: I am using version 2.0 of MongoDB, hence I cannot benefit from the aggregation framework.
Edit: It seems I have found the answer to my question.
var mapTimeAndChangesByUserId = function() {
  var key = this.user_id;
  var value = { timestamp: this.timestamp.date, changes: this.changes };
  emit(key, value);
}

var reduceMergeChanges = function(user_id, changeset) {
  var mergeFunction = function(a, b) { for (var attr in b) a[attr] = b[attr]; };
  var result = {};
  changeset.forEach(function(e) { mergeFunction(result, e.changes); });
  return { timestamp: changeset.pop().timestamp, changes: result };
}
The reduce function merges the changes in the order they come and returns the result.
db.user_change.mapReduce(
  mapTimeAndChangesByUserId,
  reduceMergeChanges,
  {
    out: { inline: 1 },
    query: { user_id: 1, "timestamp.date": { $lt: "2010-09-30" } },
    sort: { "timestamp.date": 1 }
  });
"results" : [
  {
    "_id": 1,
    "value": {
      "timestamp": "2010-09-24 19:01:52",
      "changes": {
        "fieldA": "new_valueA",
        "fieldB": null,
        "fieldC": "valueC",
        "fieldD": "valueD"
      }
    }
  }
]
Which is fine by me.
You could write a map-reduce (MR) job to do this.
Since the fields are a lot like tags, you can modify a nice cookbook example of counting tags here: http://cookbook.mongodb.org/patterns/count_tags/. Of course, instead of counting, you want the latest value applied for each field (an assumption, since this is not clear in your question).
So let's get our map function:
map = function() {
  if (!this.changes) {
    // If there were no changes for some reason, skip this record
    return;
  }
  // Iterate the changes
  for (var index in this.changes) {
    emit(index /* the field name */, this.changes[index] /* the field value */);
  }
}
And now for our reduce:
reduce = function(key, values) {
  // This part depends on your input query. If you add a sort of
  // date (ts) DESC then you will probably want the first element (0), not the last
  // as taken here with values.length - 1
  return values[values.length - 1];
}
And this will output a single document per changed field, of the form:
{
  _id: your_field_ie_fieldA,
  value: whoop
}
You can then iterate the (most likely inline) output and, bam, you have your changes.
This is of course one way of doing it, and it is not designed to be run completely inline with your app; however, that all depends on the size of the data you're working on. It could be run very close.
I am unsure whether group and distinct can run on this, but it looks like group might: http://docs.mongodb.org/manual/reference/method/db.collection.group/#db-collection-group. I should note that group is basically a MR wrapper, but you could do something like this (untested, just like the MR above):
db.col.group({
  key: { 'changes.fieldA': 1 /* ..., the rest of the fields */ },
  cond: { 'timestamp.date': { $gt: new Date('01/01/2012') } },
  reduce: function (curr, result) { },
  initial: { }
})
But it does require you to define the keys up front instead of just iterating them programmatically (which might be the better way).