My problem is with Elasticsearch: I have 1564 documents indexed in Elasticsearch and 1564 documents in MongoDB (after my last populate operation in Symfony with the ElasticaBundle: php app/console foq:elastica:populate),
but when I add a document manually, the number of indexed documents remains 1564 when it should be 1565.
Did I miss something ?
The functionality to update the Elasticsearch index when Doctrine entities are modified is documented in the bundle's README under "Realtime, selective index update". The configuration option is listener, which sits under the persistence option you should already have defined per model.
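A minimal sketch of that configuration, assuming the Doctrine MongoDB ODM driver (the index, type, and model names here are placeholders, and option names can differ slightly between bundle versions):

foq_elastica:
    indexes:
        website:
            client: default
            types:
                post:
                    mappings:
                        title: ~
                    persistence:
                        driver: mongodb
                        model: Application\PostBundle\Document\Post
                        listener:
                            insert: true
                            update: true
                            delete: true
                        finder: ~

With the listener enabled, persisting, updating, or removing a Post document through Doctrine should update the Elasticsearch index on the fly, with no repopulate needed.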
We recently moved from mongo-java-driver (core/sync) 3.12.1 to 4.4.0 and from Spring Data MongoDB 2.2.5.RELEASE to 3.3.0, along with Spring Boot 2.6.2; the MongoDB server version is 4.2.5.
We are getting exceptions with the above-mentioned error when running upsert queries on a sharded collection.
There is a way to include the shard key in the query filter, but that is not feasible for us, so we tried adding the @Sharded annotation to our DTOs, since we have different shard keys for different collections.
We still get the above-mentioned error, and we are also unable to work out what "a full copy of the entity" means in the statement below:
update/upsert operations replacing/upserting a single existing document as long as the given UpdateDefinition holds a full copy of the entity.
Other queries work fine, and the upserts also work once the shard key is added to the query filter, but that change is not feasible for us; we need a quick solution.
Please help, as we have not been able to find a solution on any platform. Thanks in advance!
So here's the deal:
You cannot upsert on a sharded collection in MongoDB UNLESS you include the shard key in the filter provided for the update operation.
To quote MongoDB Docs:
For a db.collection.update() operation that includes upsert: true and is on a sharded collection, you must include the full shard key in the filter:
- For an update operation.
- For a replace document operation (starting in MongoDB 4.2).
If you are using MongoDB 4.4+ then you have a workaround as mentioned below:
However, starting in version 4.4, documents in a sharded collection can be missing the shard key fields. To target a document that is missing the shard key, you can use the null equality match in conjunction with another filter condition (such as on the _id field). For example:
{ _id: <value>, <shardkeyfield>: null } // _id of the document missing shard key
Ref: https://docs.mongodb.com/manual/reference/method/db.collection.update/#behavior
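As far as I can tell from the Spring Data reference, "a full copy of the entity" means a replace-style write such as save(), where the whole document is sent; only then can Spring Data derive the shard key from @Sharded and append it to the filter for you, which would explain why the annotation alone does not help for arbitrary Update definitions. A minimal sketch of both options, assuming a made-up Order entity sharded on a country field (verify the names against your own model):

import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.mapping.Document;
import org.springframework.data.mongodb.core.mapping.Sharded;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;

// Hypothetical entity; "country" stands in for your real shard key.
@Document("orders")
@Sharded(shardKey = { "country" })
class Order {
    String id;
    String country;
    String status;
}

class OrderWriter {
    private final MongoTemplate mongoTemplate;

    OrderWriter(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    // Option 1: put the full shard key in the filter; the upsert can then
    // be routed to a single shard and succeeds.
    void upsertStatus(String id, String country, String status) {
        Query query = new Query(Criteria.where("_id").is(id)
                .and("country").is(country)); // full shard key in the filter
        mongoTemplate.upsert(query, new Update().set("status", status), Order.class);
    }

    // Option 2: write "a full copy of the entity". save() replaces the
    // whole document, so Spring Data can read the shard key off the entity
    // (via @Sharded) and add it to the replace filter itself.
    void saveFullCopy(Order order) {
        mongoTemplate.save(order);
    }
}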
I'm reading the mosql code to see how it uses the oplog collection from MongoDB to copy data to PostgreSQL.
Regarding the update case, for example:
I saw that mosql always writes the whole document to PostgreSQL instead of only the modified fields. That makes no sense to me, because there is no reason to update every field of a PostgreSQL row when I only want to update one or two fields. It is a real problem for me because I am using rather big documents.
Looking at the code, I saw that mosql uses the o field from the oplog, but that field holds the whole document, which is why mosql updates every field in PostgreSQL, and there is no way to know which fields were actually updated.
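For reference, here is roughly what update entries in the oplog look like (namespace and fields are made up):

use local
db.oplog.rs.find({ op: "u", ns: "mydb.mycollection" }).limit(2)

// A $set-style update records only the changed fields in "o":
// { "op": "u", "ns": "mydb.mycollection", "o2": { "_id": 1 }, "o": { "$set": { "qty": 5 } } }
// A full-document replacement (e.g. a driver-level save()) records the entire new document:
// { "op": "u", "ns": "mydb.mycollection", "o2": { "_id": 1 }, "o": { "_id": 1, "qty": 5, "name": "foo" } }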
Is there a way to figure out which fields were updated, so that I can update only those fields instead of the complete document?
I need to know how indexing in MongoDB improves query performance. Currently my DB is not indexed. How can I index an existing DB? Also, do I need to create a new field just for indexing?
Fundamentally, indexes in MongoDB are similar to indexes in other database systems. MongoDB supports indexes on any field or sub-field contained in documents within a MongoDB collection.
Indexes are covered in detail here and I highly recommend reading this documentation.
There are sections on indexing operations, strategies and creation options as well as a detailed explanations on the various indexes such as compound indexes (i.e. an index on multiple fields).
One thing to note is that by default, creating an index is a blocking operation. Creating an index is as simple as:
db.collection.ensureIndex({ zip: 1 })
Something like this will be returned, indicating the index was correctly inserted:
Inserted 1 record(s) in 7ms
When building an index on a large collection, the operation can take a long time to complete. To work around this, the background option allows you to continue to use your mongod instance during the index build.
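For example, with the same hypothetical zip field as above:

// Build the index in the background so reads and writes are not blocked:
db.collection.ensureIndex({ zip: 1 }, { background: true })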
Limitations on indexing in MongoDB are covered here.
I created a MongoDB river with an index in Elasticsearch, but later noticed that I don't need several of the fields and that I want to use a different username/password for the river.
How do I:
1. Trigger an update of the river settings with the new username/password?
2. Delete/exclude the useless fields of the index from Elasticsearch without rebuilding the whole index from the beginning?
I have around 20-30 GB of indexed data, and the whole process of pulling the data through the river can take long hours.
All I found were DELETE and PUT; there is no update for index fields or for the river mentioned either in the docs or on Google.
It's not currently possible to remove a field from a mapping. In order to remove all values of a field from all documents, you need to reindex all documents with this field removed.
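On recent Elasticsearch versions (2.3 and later; river-era clusters predate this and need a scan/scroll copy instead), the reindex can be done server-side. A rough sketch, with made-up index and field names, and a script syntax that varies by version:

curl -XPOST 'localhost:9200/_reindex' -d '{
  "source": { "index": "old_index" },
  "dest":   { "index": "new_index" },
  "script": { "inline": "ctx._source.remove(\"useless_field\")" }
}'

Create new_index with the trimmed mapping first, then point your aliases (or clients) at it once the copy finishes.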
I have a question about indexes in MongoDB.
I am using MongoDB version 1.6.5, and I am modifying all of my collection indexes.
When I ran the show collections command in the MongoDB shell, it listed my collections as:
system.indexes
stocks
options
Do I need to drop the collection system.indexes to make the new indexes on the collections apply?
The system.profile collection is not listed there; in any case, no, you do not need to. The profile collection is the output of the profiler, nothing more. Indexes will still apply.
Edit
Since your question asks two things: no, you do not need to drop system.indexes either; MongoDB will handle updating the records in there for you. Dropping it might actually damage your database.
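For example, index maintenance all goes through the collection helpers, and MongoDB keeps system.indexes in sync on its own (collection and field names here are made up):

db.stocks.ensureIndex({ symbol: 1 })   // create a new index
db.stocks.getIndexes()                 // list the indexes on the collection
db.stocks.dropIndex({ symbol: 1 })     // drop one if you change your mind
db.system.indexes.find()               // read-only view of all index definitions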