The Parse query withinKilometers() is not working after I migrated the database to my own MongoDB server (no error, but the response is empty).
The issue is being discussed on GitHub, but they say it is an issue with the MongoDB version.
I tried using MongoDB 3.0.11, 3.0.9 and 3.0.0.
One workaround mentioned is to use Cloud Code, but the query fails there too.
Does anyone have another workaround? Please help, as the deadline for the Parse data migration is around the corner.
We have to create an index on the field for MongoDB to support geo queries; it seems it does not support them by default.
Solution:
db.prod.createIndex({ "location": "2d" })
Here prod is my collection name and location is the field that stores the geo location (GeoPoint).
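Once the index exists, the original withinKilometers() query should start returning results again. As a minimal sketch with the Parse JavaScript SDK (the Prod class name, field name and coordinates here are placeholders for illustration):
// Hypothetical class/field names; replace with your own schema.
var query = new Parse.Query("Prod");
var center = new Parse.GeoPoint({ latitude: 40.0, longitude: -30.0 });

// Only return objects whose "location" lies within 5 km of the center point.
query.withinKilometers("location", center, 5);

query.find().then(function (results) {
  console.log("Found " + results.length + " nearby objects");
}, function (error) {
  console.error(error);
});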
A few more details about the issue are discussed here.
Thanks
This code is working fine on my local machine.
Bulk.find({ "xyz": 23 }).upsert().update({ $set: { "someField": 5465 } }); // field name illustrative
Bulk.execute(function (err, data) { });
When I moved this code to Azure, it wasn't working. I understand that Cosmos DB doesn't support upsert. Is that right?
Reference:
https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb-feature-support#database-commands
Should I replace it with a find followed by an insert or update as normal? Or is there any other solution available? Please help.
Yes, based on the doc MongoDB query language support, the upsert() command is not supported by the Cosmos DB Mongo API. As far as I know, there are no shortcuts so far. You need to write your own logic to determine whether a document exists, and then decide whether to insert or update, as sketched below.
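For example, a minimal sketch with the Node.js MongoDB driver (the collection and field names are hypothetical, mirroring the { xyz: 23 } filter from the question):
// Manual "upsert": check whether a matching document exists, then update or insert.
function manualUpsert(collection, filter, fields, callback) {
  collection.findOne(filter, function (err, doc) {
    if (err) return callback(err);
    if (doc) {
      // Document exists: apply the $set as a normal update.
      collection.updateOne(filter, { $set: fields }, callback);
    } else {
      // No document yet: insert the filter plus the fields as a new document.
      collection.insertOne(Object.assign({}, filter, fields), callback);
    }
  });
}

// Usage, roughly equivalent to the original bulk upsert:
manualUpsert(db.collection("myCollection"), { xyz: 23 }, { someField: 5465 }, function (err) {
  if (err) console.error(err);
});
Note that this is not atomic; if two writers race on the same filter you can still end up with duplicates unless you add a unique index on the filter field.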
Alternatively, note that the Cosmos DB DocumentDB SDK does support an Upsert method. Please refer to this case: How can I perform an UPSERT using Azure DocumentDB?
The DocumentDB API is a good choice if you are able to migrate your data.
Hope it helps you.
UPDATE
The Cosmos DB team confirmed that there is an issue on their side and they are already working on a fix.
More info in the comment section here: https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb-introduction
ORIGINAL QUESTION
We are planning to migrate to Cosmos DB, but we found an issue with the $sort command.
On our current MongoDB server, running this query:
db.getCollection('Product').find({
    "ProductTypeId": ObjectId("5913546b1ba88338e4347641"),
    "SubtypeIngredients": "5949852c1ba88344d0facbf5"
})
.skip(0).sort({ "IngredientRanks.2.Rank": 1 }).limit(1)
We get results, but when running the same query in Cosmos DB we don't get any.
If I remove the sort from the query, I do get results from Cosmos DB.
The data in the collection is the same in our local DB and in Cosmos DB.
Any help would be appreciated.
Thanks!
Update:
Here is a screenshot of the actual query showing the issue.
There is no specific guarantee that Cosmos DB supports all of MongoDB's operators and functions, particularly non-trivial uses of the API (like chaining sort, skip, etc.). This extends to index optimization and selection as well.
This is what is mentioned on the Cosmos DB website.
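Until the fix mentioned in the update above lands, one possible interim workaround (a sketch only, assuming the filtered result set is small enough to hold in memory) is to let Cosmos DB do the match and perform the sort and limit in your own code:
// Fetch the matching documents, then sort and limit client-side.
var docs = db.getCollection('Product').find({
    "ProductTypeId": ObjectId("5913546b1ba88338e4347641"),
    "SubtypeIngredients": "5949852c1ba88344d0facbf5"
}).toArray();

// Ascending sort on IngredientRanks[2].Rank, the same key as the original query.
docs.sort(function (a, b) {
    return a.IngredientRanks[2].Rank - b.IngredientRanks[2].Rank;
});

var top = docs.slice(0, 1); // equivalent of .limit(1)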
I have two Azure Cosmos DB accounts that use the MongoDB API. The first one was created by someone else and the second one was created by me.
If I query my database, the structure is very weird, with $t and $v properties.
My structure:
Structure of the other DB (like it should be):
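(The screenshots are not reproduced here, but the difference looks roughly like this; the document and the exact BSON type codes are only an illustration.)
// Older account (JSON schema): every value is wrapped in { $t: <type>, $v: <value> }
{ "name": { "$t": 2, "$v": "Coffee" }, "price": { "$t": 1, "$v": 3.5 } }

// Newer account (BSON schema), like it should be:
{ "name": "Coffee", "price": 3.5 }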
My backend does work properly with both, but I want to add Azure Search and I can't do this with my structure. Why is this happening and how can I fix it?
(If someone can make this a comment instead of an answer, it would be appreciated)
I'm getting the same thing. It looks like Microsoft gave an answer here as to why it happens: http://answers.flyppdevportal.com/MVC/Post/Thread/e0ffdbcd-0b43-4cd5-9d21-1a95ce0279dd?category=azuredocumentdb
The incorrect format is a result of intermixing the Cosmos DB: MongoDB API with the Cosmos DB: DocumentDB API.
It's not very clear to me, though, why those two are intermixed, since MongoDB maintains the MongoDB API NuGet package.
This is happening because one of your Cosmos DB MongoDB API accounts is older than the other: the older one uses the JSON schema (used by the SQL API as well), while the newer one uses the more versatile (and MongoDB compliant) BSON schema. You just need to request conversion of the account to the new schema by emailing askcosmosmongoapi@microsoft.com.
I am new to MongoDB, so please bear with me. I have a MongoDB database that can be updated from two places: from an admin panel in PHP and from a deployed server using Morphia. My question is: if MongoDB is updated from the admin panel after the Datastore is created using Morphia, how do I get the updated values from the DB into the Datastore? (I have tried to search for this, but all the results just point to how to update data through Morphia. It could be that I am formatting the query wrong.) Does it automatically get updated in the Datastore, or do I have to keep discarding the existing Datastore and creating a new one? In that case, what would be the best way to do that?
Also, how would saves from Morphia and updates from the admin panel be handled so that there is no conflict?
The Datastore doesn't cache anything. It's simply a conduit through which to execute database operations. If you query via Morphia after updating from your PHP app, you'll see your new data just fine. You don't need to create a new Datastore each time.
I've been test-driving Alteryx for the last week or so and was wondering if anyone has successfully connected to a Mongo Labs database using the Alteryx Mongo Input and Output tools. I've tried numerous times and can't seem to get it to work.
Yes, I am using that MongoDB extractor on a daily basis to generate TDE files to feed Tableau, since Tableau does not offer dedicated extractors, but I admit it was difficult to get started.
Extraction is now swift from a MongoDB instance running on AWS. Make sure you indicate the proper collection, and be aware of one flaw in the current version of the extractor, as of Alteryx 9.5: it is missing the flag that would let you read from a read-only configured MongoDB. It is on the priority list for v10. In the meantime, you should connect to a DB that can be written to and, to reassure your DBA, point out that Alteryx workflows can't write to it unless you drag the MongoDB Output Tool into the workflow.
You can also use the Properties / Criteria window to set filters on the indexed fields of your MongoDB. After much research, I found the precise syntax for filtering the extraction by date:
{_id: {$gt: ObjectId("54a4fe800000000000000000")}}
This works because each MongoDB ObjectId contains an embedded timestamp of its creation time.
To get the proper time, you can use this excellent website:
http://steveridout.github.io/mongo-object-time/
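If you prefer to compute the boundary ObjectId yourself, here is a small sketch in plain JavaScript (no driver needed; the date is only an example):
// Minimal ObjectId for a given date: the 4-byte Unix timestamp (in seconds) in hex,
// padded with 16 zeros for the remaining machine/process/counter bytes.
function objectIdForDate(date) {
    var seconds = Math.floor(date.getTime() / 1000);
    return seconds.toString(16) + "0000000000000000";
}

var hex = objectIdForDate(new Date("2015-01-01T00:00:00Z"));
// hex === "54a48e000000000000000000"; use it as ObjectId(hex) in a filter like the one above.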
And if you need fancier filters, here is some help with the syntax:
http://www.querymongo.com/
I hope that helps...