Is there any way to upsert into an array of objects using a Firestore query? - google-cloud-firestore

Example:
[
  { inst: "EVA", std: "12th" },
  { inst: "KSF", std: "12th" }
]
As per the above example: if inst: "EVA" already exists in the "qualification" array, we need to update that existing object; since inst: "KSF" does not yet exist in the "qualification" array, we need to add it.
Is there any way to upsert like this with a Firestore query?

There is no "upsert" operation for objects in arrays. If you need to make changes to that array, you will have to read the document, modify the contents of the array in memory, then update the document with the new contents of the array.
Arrays of objects usually do not work the way that people want, given their limitations on querying and updating. It's usually better to store data as documents in a nested subcollection, so they can be more easily queried and updated by the contents of their fields.
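As a minimal sketch of that read-modify-write approach (assuming the Node.js Admin SDK, a document holding the qualification array, and inst as the match key; the helper name and document path are hypothetical):
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

// Read the document, merge the incoming entries into the array in
// memory, then write the array back. A transaction guards against
// concurrent writers clobbering each other.
async function upsertQualifications(docRef, entries) {
  await db.runTransaction(async (tx) => {
    const snap = await tx.get(docRef);
    const qualification = snap.get('qualification') || [];
    for (const entry of entries) {
      const idx = qualification.findIndex((q) => q.inst === entry.inst);
      if (idx >= 0) {
        qualification[idx] = { ...qualification[idx], ...entry }; // update existing
      } else {
        qualification.push(entry); // insert new
      }
    }
    tx.update(docRef, { qualification });
  });
}

// Usage:
// await upsertQualifications(db.doc('users/someUser'),
//   [{ inst: 'EVA', std: '12th' }, { inst: 'KSF', std: '12th' }]);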

Related

MongoDB Bulk Find and Replace of ObjectId on a single Document

We have two documents that have been merged and now share a single ObjectId.
There exists a configuration document that may have references to the old ObjectId. The old ObjectId can appear all over this document, which is full of nested arrays and lists.
We want to do a simple find and replace on this document, preferably without replacing the entire document itself.
Is there a generic way to set every field that has ObjectIdA as a value and replace it with ObjectIdB?
There's no way to do that, no. You need to perform updates on all possible paths explicitly.
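For illustration, a sketch of what "explicitly" means here. The field names (owner, members.userId) and the ObjectId values are hypothetical, and the arrayFilters form requires MongoDB 3.6+:
const oldId = ObjectId('5f0000000000000000000001'); // hypothetical
const newId = ObjectId('5f0000000000000000000002'); // hypothetical

// Every path where the old ObjectId may occur gets its own update.
db.config.updateMany(
  { owner: oldId },
  { $set: { owner: newId } }
);

// Occurrences inside nested arrays need arrayFilters (MongoDB 3.6+).
db.config.updateMany(
  { 'members.userId': oldId },
  { $set: { 'members.$[m].userId': newId } },
  { arrayFilters: [{ 'm.userId': oldId }] }
);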

mongodb EmbedMany strategy=set

I had a collection with an embedMany attribute using strategy=set, so an ArrayCollection was stored. However, we deleted some items from the array, and now some documents have keys that are no longer sequential integers.
I need to fix this inconsistency; how can I do that?
You could use the $type operator and query for all documents where your embedMany field is of type object. Once you have these documents, apply array_values to the fields where an array should be stored and save them again. Also, to avoid such situations in the future, you should change your collection's strategy to either setArray or atomicSetArray.
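A rough mongosh sketch of that repair, assuming the collection is items and the field is embedManyField (both hypothetical). Note that $type: 'object' can also match arrays of objects, so the script re-checks in JavaScript:
db.items.find({ embedManyField: { $type: 'object' } }).forEach(function (doc) {
  if (Array.isArray(doc.embedManyField)) return; // already a proper array
  // Object.values() re-indexes the map into a sequential array,
  // the shell-side equivalent of PHP's array_values().
  db.items.updateOne(
    { _id: doc._id },
    { $set: { embedManyField: Object.values(doc.embedManyField) } }
  );
});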

MongoDB 2.2 - Updating Array Nested Document

Is it possible to update a single document field in the Level3 array using $update and $elemMatch? I realize I cannot use the positional operator multiple times given this case and historically I've modified the Level2 nested document with the required deeper changes since these documents aren't very large. I'm hoping there is some syntax that makes it possible to update Level3 array documents using $elemMatch without knowing the position of the target document in the Level3 array or containing document in Level2.
Example:
db.collection.update(
  { _id: '123', level2: { $elemMatch: { 'level3.id': 'bbb', 'level3.e1': 'hij' } } },
  { $set: { 'level2.level3.createdDate': new Date() } }
)
{
  _id: '123',
  f1: 'abc',
  f2: 'def',
  level2: [
    {
      _id: 'aaa',
      e1: 'hij',
      e2: 'lmo',
      level3: [
        {
          name: 'foo',
          type: 'bar',
          createdDate: '2013-3-28T05:18:00'
        }
      ]
    },
    {
      _id: 'bbb',
      e1: 'hij',
      e2: 'lmo',
      level3: [
        {
          name: 'foo2',
          type: 'bar2',
          createdDate: '2013-3-28T05:19:00'
        }
      ]
    }
  ]
}
There is no way to do this currently using a regular update operation for reasons you noted.
The only workaround at the moment is to add versioning to your document and use optimistic locking: read the document, find the appropriate elements to modify in your application, change their values, then issue an update whose query includes the version you read (so that if another thread updated the document between your read and your update, you would not overwrite its changes; instead you would have to reload the document and try again).
The versioning strategy does not have to cover the entire document; you could version the first-level array elements, and then you would be able to update just the sub-array you are concerned with (via an update with $set).
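A sketch of that optimistic-locking loop in mongosh syntax, assuming a numeric version field on the document (the field name and retry shape are assumptions, not part of the original answer):
// Retry until the versioned update wins the race.
for (let attempt = 0; attempt < 5; attempt++) {
  const doc = db.collection.findOne({ _id: '123' });
  // ...modify the desired level3 entry inside doc.level2 in app code...
  const res = db.collection.updateOne(
    { _id: '123', version: doc.version },   // only matches if unchanged
    { $set: { level2: doc.level2 }, $inc: { version: 1 } }
  );
  if (res.modifiedCount === 1) break; // success; otherwise reload and retry
}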

In MongoDB, do document _id's need to be unique across a collection or the entire DB?

I'm building a database with several collections. I have unique strings that I plan on using for all the documents in the main collection. Documents in other collections will reference documents in the main collection, which means I'll have to save said id's in the other collections. However, if _id's only need to be unique across a collection and not across an entire database, then I would just make the _id's in the other collections also use the aforementioned unique strings.
Also, I assume that in order to set my own _id's, all I have to do is have an "_id":"unique_string" property as part of the document that I insert, correct? I wouldn't need to convert the "unique_string" into another format, right?
Also, hypothetically speaking, would I be able to have a variable save the string "_id" and use that instead? Just to be clear, something as follows: var id = "_id" and then later on in the code (during an insert or a query for example) have id:"unique_string".
_ids only have to be unique within a single collection. You can quickly verify this by inserting two documents with the same _id into two different collections: both inserts succeed, whereas a second insert with the same _id into the same collection fails with a duplicate key error.
Your other assumptions are correct; just try them and see whether they work (they will). The proof of the pudding is in the eating.
Note: use _id directly; var id = "_id" just complicates the code.
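A quick mongosh sketch of all three points (the collection names are hypothetical):
// Same string _id in two different collections: both succeed.
db.main.insertOne({ _id: 'unique_string', name: 'primary doc' });
db.refs.insertOne({ _id: 'unique_string', mainId: 'unique_string' });

// A duplicate within the same collection fails:
// db.main.insertOne({ _id: 'unique_string' }) // E11000 duplicate key error

// A variable holding the key name works, but needs a computed key:
const id = '_id';
db.other.insertOne({ [id]: 'another_unique_string' });
One caveat on the last point: in JavaScript a plain id: 'value' literal would create a field named id, so the computed-key form [id] is required.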

MongoDB: Speed of field ("inside record") search in comparison with speed of search in "global scope"

My question may not be very well formulated because I haven't worked with MongoDB yet, so I'd like to know one thing.
I have an object (record/document/anything else) in my database, in the global scope.
And this object contains a really huge array of other objects.
So, what about the speed of searching in the global scope vs. searching "inside" the object? Is it possible to index all the "inner" records?
Thanks in advance.
So, like this:
users: {
  ..
  user_maria: {
    age: "18",
    best_comments: {
      goodnight: "23rr",
      sleeptired: "dsf3"
      ..
    }
  },
  user_ben: {
    age: "18",
    best_comments: {
      one: "23rr",
      two: "dsf3"
      ..
    }
  }
}
So, how can I make it fast to find user_maria -> best_comments -> goodnight (i.e., index the contents of "best_comments")?
First of all, your example schema is very questionable. If you want to embed comments (which is a big if), you'd want to store them in an array to allow appropriate indexing. Also, post your schema in JSON format so we don't have to parse the whole name/value thing:
// a db.users document
{
  name: "maria",
  age: 18,
  best_comments: [
    {
      title: "goodnight",
      comment: "23rr"
    },
    {
      title: "sleeptired",
      comment: "dsf3"
    }
  ]
}
With that schema in mind you can put an index on name and best_comments.title, for example like so:
db.users.ensureIndex({ name: 1, 'best_comments.title': 1 })
Then, when you want the query you mentioned, simply do
db.users.find({name:"maria", 'best_comments.title':"first"})
And the database will hit the index and will return this document very fast.
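If you want to confirm the index is actually used, explain() shows the query plan (the 'executionStats' argument is later-shell syntax, an assumption here):
// Look for an IXSCAN stage in the winning plan.
db.users.find({ name: 'maria', 'best_comments.title': 'goodnight' })
  .explain('executionStats');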
Now, all that said, your schema is very questionable. You mention you want to query specific comments, but that requires either keeping comments in a separate collection or filtering the comments array app-side. Additionally, having huge, ever-growing embedded arrays in documents can become a problem: documents have a 16 MB limit, and if documents keep increasing in size, Mongo will have to continuously move them on disk.
My advice:
- Put comments in a separate collection.
- Either use one document per comment or make comment bucket documents (say, 100 comments per document); see the sketch below.
- Read up on Mongo/NoSQL schema design. You always query for root documents, so if you end up needing a small part of a large embedded structure, you need to re-examine your schema or you'll be pumping huge documents over the connection and requiring app-side filtering.
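As an illustration of the bucket idea (the collection and field names are hypothetical), an upsert keeps appending to the current bucket until it is full, then starts a new one:
// Push into a bucket that still has room; if none matches,
// the upsert creates a fresh bucket for this user.
db.comment_buckets.updateOne(
  { user: 'maria', count: { $lt: 100 } },
  {
    $push: { comments: { title: 'goodnight', comment: '23rr' } },
    $inc: { count: 1 }
  },
  { upsert: true }
);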
I'm not sure I understand your question, but it sounds like you have one record with many attributes.
record = { attr1: 1, attr2: 2, ... }
You can create an index on any single attribute or any combination of attributes. Also, you can create any number of indices on a single collection (MongoDB collection == MySQL table), whether or not each record in the collection has the attributes being indexed on.
edit: I don't know what you mean by 'global scope' within MongoDB. To insert any data, you must define a database and collection to insert that data into.
Database 'Example':
  Collection 'table1':
    records:
      { a: 1, b: 1, c: 1 }
      { a: 1, b: 2, d: 1 }
      { a: 1, c: 1, d: 1 }
    indices:
      ensureIndex({ a: 1, d: 1 }) <- this will index on a, then by d; the fact that record 1 doesn't have an attribute 'd' doesn't matter, and this will increase query performance
edit 2:
Well, first of all, in your table here you are assigning multiple values to the attributes "name" and "value". MongoDB will ignore/overwrite the earlier instantiations, so only the final ones will be included in the collection.
I think you need to reconsider your schema here. You're trying to use it as a series of key value pairs, and it is not specifically suited for this (if you really want key value pairs, check out Redis).
Check out: http://www.jonathanhui.com/mongodb-query