I use Meteor with MongoDB 3.2.12 (WiredTiger).
I made a dump using mongodump.
When I try to restore it, I get this error on some documents:
error: write to oplog failed: BadValue: object to insert exceeds
cappedMaxSize
This collection is not capped (I checked with db.my_collection_name.stats()["capped"]).
How is it possible to import such documents?
Thanks in advance
It could be that the collection was previously created as capped and it still has the property attached to it.
Check the collection properties, or (if you don't need the data) drop it and re-create it.
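If you'd rather script the check than run it in the shell, here is a minimal pymongo sketch (the database name is a placeholder, not taken from the question):

from pymongo import MongoClient

db = MongoClient()["meteor"]  # hypothetical database name
stats = db.command("collstats", "my_collection_name")
print(stats.get("capped", False))  # True if the capped flag is still set
# If the data is disposable, drop it and recreate it as a plain collection:
# db["my_collection_name"].drop()
# db.create_collection("my_collection_name")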
I got the same error when trying to insert an object with recursive properties (one of the objects was self-referencing).
While importing a new JSON file into my MongoDB collection, I accidentally used just one '-' instead of two. E.g.:
mongoimport --host=127.0.0.1 --db=dataBaseName -collection=people --file=importFile.json
I believe that, due to the missing second '-', I'm now stuck with the following when I type show collections:
people
ollection=people
I can't access, drop, or interact with the second one. Apart from dropping the database and starting over, is there a way around this issue?
You can rename the collection like this:
> use YourDatabase
// Might wanna drop people collection first
> db.getCollection("ollection=people").renameCollection("people")
Hope this helps!
I'd like to update an array element in MongoDB. In the mongo shell this works:
db.ipolls.update({_id:"5Qu9fXG84tNSZo7sv","players.label":"R1"},{$inc:{"players.$.score":1}});
But when I run this in Meteor:
Ipolls.update( {_id:pollster,"players.label":notChosen.label},{$inc:{"players.$.comparisons":1}});
I get the error: Uncaught Error: Not permitted. Untrusted code may only update documents by ID. [403]
Is it possible to run this query on the client side?
On the client you can only use the _id field in the selector, but you've used {_id:pollster,"players.label":notChosen.label}.
This is a Meteor restriction; it exists to make things a bit safer. Were it not in place, you could theoretically craft an odd selector and extract information via the .allow rule checks.
Query for the document first, then update it by _id. Note that the positional $ operator needs the array field in the selector, which the client forbids, so address the matched element by index instead (underscore, bundled with Meteor, is used for the index lookup):
var doc = Ipolls.findOne({_id: pollster, "players.label": notChosen.label});
var index = _.indexOf(_.pluck(doc.players, "label"), notChosen.label);
var inc = {}; inc["players." + index + ".comparisons"] = 1;
Ipolls.update({_id: doc._id}, {$inc: inc});
I have two classes, one referencing the other via @Reference.
When inserting, I insert the referenced object first and the object holding the reference afterwards.
Fetching works fine most of the time, but sometimes I get exceptions like:
SEVERE: java.lang.RuntimeException:
com.google.code.morphia.mapping.MappingException: The reference({
"$ref" : "UserContactLink", "$id" : "50e92481cde5dadc12ff854b" })
could not be fetched for net.shisoft.db.obj.UserContact.ucs
When I checked the id in UserContactLink, there was no document with that id. I think this is because I killed the mongod process last time, so the "transaction" (from my point of view) never finished and the data relation was corrupted.
Since MongoDB doesn't seem to have transactions, what can I do about this issue?
There are no transactions. In many cases you can restructure your documents to avoid the problem (e.g. by embedding documents).
You will always need to insert the referenced document first. Upon insert, the MongoDB server creates the ObjectId of the entity, which is then used in the reference. You might want to check the ID before you reference it (a simple null check).
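Morphia aside, the idea is easy to sketch at the driver level; here is a minimal pymongo example (the database name and document shapes are assumptions, not your Morphia mapping):

from bson import DBRef
from pymongo import MongoClient

db = MongoClient()["mydb"]  # hypothetical database name
# insert the referenced document first and keep its id
link_id = db.UserContactLink.insert({"some": "data"})  # pymongo 2.x-era API
# the 'simple null check' before wiring up the reference
if db.UserContactLink.find_one({"_id": link_id}) is None:
    raise RuntimeError("referenced document was not persisted")
# field shape is a guess; your Morphia mapping decides the real structure
db.UserContact.insert({"ucs": DBRef("UserContactLink", link_id)})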
I have a collection in which all of my documents have at least these two fields, say name and url (where url is unique, so I set up a unique index on it). If I try to insert a document with a duplicate url, MongoDB raises an error and halts the program. I don't want this behavior; I need something like MySQL's INSERT IGNORE, so that MongoDB skips the document with the duplicate url and continues with the next documents.
Is there some parameter I can pass to the insert command to achieve this behavior? I generally do a batch of inserts using pymongo as:
collection.insert(document_array)
Here collection is a pymongo Collection instance and document_array is an array of documents.
So is there some way I can implement the insert or ignore functionality for a multiple document insert?
Set the continue_on_error flag when calling insert(). Note PyMongo driver 2.1 and server version 1.9.1 are required:
continue_on_error (optional): If True, the database will not stop
processing a bulk insert if one fails (e.g. due to duplicate IDs).
This makes bulk insert behave similarly to a series of single inserts,
except lastError will be set if any insert fails, not just the last
one. If multiple errors occur, only the most recent will be reported
by error().
Use insert_many(), and set ordered=False.
This will ensure that all write operations are attempted, even if there are errors:
http://api.mongodb.org/python/current/api/pymongo/collection.html#pymongo.collection.Collection.insert_many
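A minimal sketch, assuming a unique index on url as in the question (database and collection names are placeholders):

import pymongo

coll = pymongo.MongoClient()["mydb"]["mycoll"]  # hypothetical names
coll.create_index("url", unique=True)
docs = [{"url": "a"}, {"url": "a"}, {"url": "b"}]  # the second is a duplicate
try:
    coll.insert_many(docs, ordered=False)
except pymongo.errors.BulkWriteError as err:
    # duplicates are reported here; "a" and "b" were still inserted
    print(err.details["writeErrors"])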
Try this:
try:
    coll.insert(
        doc_or_docs=doc_array,
        continue_on_error=True)
except pymongo.errors.DuplicateKeyError:
    pass
The insert operation will still raise an exception if an insert fails (such as an attempt to insert a duplicate value on a unique index), but it will not affect the other items in the array. You can then swallow the error as shown above.
Why not just put your call to .insert() inside a try: ... except: block and continue if the insert fails?
Alternatively, you could use a regular update() call with the upsert flag. Details here: http://www.mongodb.org/display/DOCS/Updating#Updating-update%28%29
If you already have the array of documents in memory in your Python script, why not insert them by iterating through them and simply catch the ones that fail due to the unique index?
for doc in docs:
    try:
        collection.insert(doc)
    except pymongo.errors.DuplicateKeyError:
        print 'Duplicate url %s' % doc
Where collection is an instance of a collection created from your connection/database instances and docs is the array of dictionaries (documents) you would currently be passing to insert.
You could also decide what to do with the duplicate keys that violate your unique index within the except block.
It is highly recommended to use upsert:
stat.update({'location': d['user']['location']},
            {'$inc': {'count': 1}}, upsert=True, safe=True)
Here stat is the collection. If the visitor's location is already present in the collection, count is incremented by one; otherwise count is set to 1.
Here is the link to the documentation: http://www.mongodb.org/display/DOCS/Updating#Updating-UpsertswithModifiers
What I am doing:
Generate the array of MongoDB ids I want to insert (a hash of some values, in my case)
Remove the IDs that already exist (I use a Redis queue for performance, but you can query Mongo)
Insert your cleaned data!
Redis is perfect for this; you could also use Memcached or MySQL's MEMORY engine, according to your needs. A sketch of the whole pipeline follows.
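Here is a minimal sketch of that pipeline; it uses a Mongo query for the dedup step instead of Redis to stay self-contained, and all names are placeholders:

import hashlib
from pymongo import MongoClient

coll = MongoClient()["mydb"]["mycoll"]  # hypothetical names
values = ["http://a", "http://b"]       # example payload
# 1. generate the ids (a hash of the value, as in the answer)
docs = [{"_id": hashlib.md5(v.encode()).hexdigest(), "url": v} for v in values]
# 2. drop the ids that already exist (Redis would serve the same purpose, faster)
ids = [d["_id"] for d in docs]
existing = {d["_id"] for d in coll.find({"_id": {"$in": ids}}, {"_id": 1})}
# 3. insert only the cleaned data
new_docs = [d for d in docs if d["_id"] not in existing]
if new_docs:
    coll.insert(new_docs)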
What I want:
I have a master collection of products, I then want to filter them and put them in a separate collection.
db.masterproducts.find({category:"scuba gear"}).copyTo(db.newcollection)
Of course, I realise the 'copyTo' does not exist.
I thought I could do it with MapReduce, since results can be written to a new collection via the new 'out' parameter in v1.8; however, that new collection is not a subset of my original collection. Or can it be, if I use MapReduce correctly?
To get around it I am currently doing this:
Step 1:
/usr/local/mongodb/bin/mongodump --db database --collection masterproducts -q '{category:"scuba gear"}'
Step 2:
/usr/local/mongodb/bin/mongorestore -d database -c newcollection --drop packages.bson
My 2 step method just seems rather inefficient!
Any help greatly appreciated.
Thanks
Bob
You can iterate through your query result and save each item like this:
db.oldCollection.find(query).forEach(function(x){db.newCollection.save(x);})
You can create a small server-side JavaScript function (like this one; just add the filtering you want) and execute it using eval
You can use dump/restore in the way you described above
A copy-collection command should be in MongoDB soon (features are implemented in vote order)! See the JIRA feature request.
You should be able to create a subset with MapReduce (using 'out'). The problem is that MapReduce has a special output format, so your documents are going to be transformed (there is a JIRA ticket to add support for another format, but I cannot find it at the moment). It is also going to be very inefficient :/
Copying a cursor to a collection makes a lot of sense, I suggest creating a ticket for this.
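For what it's worth, the same one-pass copy works from a driver as well; a minimal pymongo sketch, assuming the database and collection names from the question:

from pymongo import MongoClient

db = MongoClient()["database"]  # database name from the question
for doc in db.masterproducts.find({"category": "scuba gear"}):
    db.newcollection.insert(doc)  # pymongo 2.x-era API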
There is also the toArray() method, which can be used:
// create the new collection
db.createCollection("resultCollection")
// now query for type "foo" and insert the results into the new collection
db.resultCollection.insert(db.originalCollection.find({type: 'foo'}).toArray())