CouchDB sort doesn't work - nosql

I recently started using CouchDB, but I have had no success with the sort option in a Mango query.
My database is a simple person dataset, and for this example I'd like to show all documents with the type "person" and sort them by "firstName".
{
  "selector": {
    "type": "person"
  },
  "sort": ["firstName"]
}
But it doesn't work; I just get the documents back in no particular order.
This is the list of my indexes:
special: _id
json: gender
json: firstName
json: lastName
Is there something I forgot?
Edit:
As Alexis also pointed out, the docs say that the sorted field has to be present in the selector.
After testing around, I figured out that the best fix is to add a greater-than-null condition on the sorted field. That works in my case.
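For reference, a minimal sketch of the adjusted query under that approach (field names taken from the question; the "$gt": null condition only serves to pull firstName into the selector so the index can be used for sorting):
{
  "selector": {
    "type": "person",
    "firstName": { "$gt": null }
  },
  "sort": ["firstName"]
}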

Related

Can you update a collection in MongoDB and remove the first/last char on one field?

My question might be simple, but here's some more context to it.
I have a MySQL DB and I've used an ETL tool to populate a MongoDB with it; however, I couldn't manage to create a proper ObjectId reference (I can only get a string of the ObjectId).
So far I've had an idea (maybe crazy but still.. could work)
I got this field populated like this in one document :
"field1" : "ObjectId('5d48845c456145ee9d1ccffde')",
What I would want to achieve through mongoDB is removing the first and last char to get (stripping the double quotes):
"field1" : ObjectId('5d48845c456145ee9d1ccffde'),
(Note that MongoDB seems to automatically convert single quotes to double quotes after the change, so my reference becomes correct.)
The problem is, I can't find anything close to an update script for MongoDB to achieve this.
Is there any way to do this?
Using NodeJS could work; however, querying the document in this state doesn't return field1 (probably because it finds it incorrect)...
If it's a one-time update, you can use the following query:
db.COLLECTION.aggregate([
  {
    $addFields: {
      "field1": {
        $toObjectId: {
          $substrBytes: ["$field1", 10, 24]
        }
      }
    }
  },
  {
    $out: "COLLECTION"
  }
])
In the aggregation, field1 is cast to an ObjectId: $substrBytes extracts the 24 hex characters that follow the "ObjectId('" prefix (10 bytes in), and $out then replaces the old data in the collection with the aggregated result.
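Since $out overwrites the target collection, a cautious variant (a sketch assuming the same COLLECTION name) is to preview the conversion on a few documents first and only add the $out stage once the output looks right:
// Preview the ObjectId conversion on three documents without writing anything.
db.COLLECTION.aggregate([
  { $limit: 3 },
  {
    $addFields: {
      "field1": { $toObjectId: { $substrBytes: ["$field1", 10, 24] } }
    }
  }
])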

How to query all documents, filter for a specific field and return the value for each document in Elasticsearch?

I'm currently running an Elasticsearch instance which is synchronizing from a MongoDB via river. The MongoDB contains entries like this:
{field1: "value1", field2: "value2", cars: ["BMW", "Ford", "Porsche"]}
Not every entry in Mongo has a cars field.
Now I want to create an Elasticsearch query that searches over every document and returns just the cars field from every single document indexed in Elasticsearch.
Is it even possible? Elasticsearch must touch every single document to return the cars field. Maybe querying with Mongo is just easier and as fast as Elasticsearch. What do you think?
The following query POSTed to hostname:9200/_search should get you started:
{
  "filter": {
    "exists": {
      "field": "cars"
    }
  },
  "fields": ["cars"]
}
The filter clause limits the results to documents with a cars field.
The fields clause says to only return the cars field. If you wanted the entire document returned, you would leave this section out.
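If you are on a newer Elasticsearch version, the top-level filter and fields syntax above no longer applies; a rough equivalent (an assumption about your version, not part of the original answer) uses a bool query with an exists filter plus _source filtering:
{
  "query": {
    "bool": {
      "filter": { "exists": { "field": "cars" } }
    }
  },
  "_source": ["cars"]
}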
References:
https://www.elastic.co/guide/en/elasticsearch/reference/current/common-options.html#_response_filtering
Make elasticsearch only return certain fields?
Elasticsearch (from my understanding) is not intended to be an SSoT database. It is very good at text searching and analytics aggregations, but it isn't necessarily intended to be your primary database.
However, your use case isn't necessarily non-performant in Elasticsearch; it sounds like you just want to filter on your cars field, which you can do as documented here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-fields.html
Lastly, I would actually venture that Elasticsearch is faster than Mongo in this case (assuming the cars field is NOT indexed in Mongo but is indexed in Elasticsearch, which are their respective defaults), since you probably want to filter out the case in which the cars field is not set.
tl;dr: Elasticsearch isn't intended for your particular use case, but it is probably faster than Mongo, assuming you filter out documents where the cars field is 'missing'.

List documents in Meteor collection with duplicate first names

My 'Programs' collection would look like this (as an array):
[{ FullName: "Jane Doe", CampYear: "mays15",...}, { FullName: "Jane Doe", CampYear: "mays16",...},...]
Some people in the collection are newbies and have just one document in it; others have multiple documents and are returnees. We'd like the ability to mark or flag the newbies somehow, i.e. iterate through the collection and single out those who have just one document in there. The trouble is that with a list of, say, 150 names, I'd need a separate find operation on the collection for each name, which is too intensive.
I tried using aggregation via the meteorhacks:aggregate but couldn't get it to work. After loading the package, my IDE wouldn't recognize the .aggregate method at all, even on the server.
Underscore might be a worthwhile way of doing it, but I couldn't find a method that might be of assistance.
Any ideas how we could do this?
Based on your comment, I'd probably denormalize your data. I'd have a new collection called CampAttendance or something like that. Then you'd have the structure:
{
  "name": "The camper's name",
  "years": ["mays2015", ...]
}
You can then use upsert to either insert a new record or $push another camp year onto the years array as you're importing data.
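A minimal sketch of that upsert (the collection name comes from the answer; the sample values are assumptions for illustration):
// Pushes "mays16" onto years if a "Jane Doe" document exists,
// otherwise inserts { name: "Jane Doe", years: ["mays16"] }.
CampAttendance.update(
  { name: "Jane Doe" },
  { $push: { years: "mays16" } },
  { upsert: true }
);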
To get the camper names who are 'newbies' then, you do:
CampAttendance.find({ years: { $size: 1 } });

mongodb $addToSet failure, specify full document to insert

I've done a bit of research on this and haven't come across anything that jumps out at me immediately as what I'm looking for.
Say we have a document (or documents) in a collection that look something like this:
// First example document
{
  "_id": "JK",
  "letters": ["J", "K"]
}
// Second example document
{
  "_id": "LM",
  "letters": ["L"]
}
So I run a query like the one below to see if I have any matching documents; of course I don't, so I expect to get null.
> db.example.findOne({"_id": "LM", "letters": {"$in": ["M"]}})
null
So I do an update and add "M" to the letters array on the documents (syntax may not be quite right):
> db.example.update({"_id": "LM"}, {"$addToSet": {"letters": "M"}})
I also run the risk of not having a matching _id at all, so given the example documents in the collection, the findOne below would also return null:
> db.example.findOne({"_id": "AB", "letters": {"$in": ["A"]}})
null
Based on the way I've constructed the above query, I get null back when "A" is not found in letters or the _id of "AB" is not found on any document. In this case I know that this document isn't in there because I know what is in the collection.
What I'd like to do is keep my $addToSet update query from above and modify it to use upsert, WHILE ALSO specifying the document to insert in the event that $addToSet has no existing document to match, so I can cut down on database round trips. Is this possible? Or will I have to break up my queries a bit to accommodate this?
Because this information may influence answers:
I do my querying through mongo shell and pymongo.
mongo version: 2.6.11
pymongo version: 2.8
Thanks for any help!
EDIT: After a break and a bit more digging, it seems $setOnInsert does what I was looking for. I believe this probably solves my issue, but I haven't had a chance to test it yet.
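A minimal sketch of that pattern, using the example documents above ($setOnInsert fields only apply when the upsert actually inserts; the "note" field is a hypothetical extra, not from the original documents):
db.example.update(
  {"_id": "AB"},
  {
    "$addToSet": {"letters": "A"},
    "$setOnInsert": {"note": "created by upsert"}
  },
  {"upsert": true}
)
// If _id "AB" exists, "A" is simply added to letters; if not, a new document
// { "_id": "AB", "letters": ["A"], "note": "created by upsert" } is inserted.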

How can i change _id field in MongoDB Collection to User_id?

I am a new user of MongoDB. In MongoDB, whenever you insert into a collection, a field called _id is added by default.
For Example:
db.users.insert({"User_id":"1","User_Name":"xxx","Address":"yyyy"})
db.users.find()
It shows
{ "_id" : ObjectId("528475fc326b403f580d2eba"), "User_id" : "1", "User_Name" : "xxx",Address" : "yyyy" }
I don't need the _id field and I want to replace it with User_id using auto-increment values.
Is this possible? Please help me. Thanks in advance.
The _id field is really special in MongoDB: it is your primary key, and there is no way you can have a document without it. Even if you try to insert a document without it, Mongo will create it for you (as in your example). Moreover, you cannot modify the _id field of an existing document.
But you can create a document with your own _id. So if you want, you can do db.users.insert({"_id":"1","User_Name":"xxx","Address":"yyyy"}) (why exactly is "1" a string, by the way?).
Just remember that _id now plays the role of User_id, and keep in mind that this _id must be unique.
Keep in mind that MongoDB is not like SQL: it does not have auto-increment keys (not because the creators did not know how to implement them, but because you can mostly live without them). You can, however, create something that resembles the same behaviour.
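One common way to approximate it is a separate counters collection that is atomically incremented for each new id; the sketch below is illustrative (the counters collection and seq field are assumptions, not something MongoDB provides out of the box):
// Atomically fetch the next sequence number for users.
function getNextUserId() {
  var counter = db.counters.findAndModify({
    query: { _id: "User_id" },
    update: { $inc: { seq: 1 } },
    new: true,
    upsert: true
  });
  return counter.seq;
}

db.users.insert({ "_id": getNextUserId(), "User_Name": "xxx", "Address": "yyyy" });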
As far as I can understand, your problem is that you want to use MongoDB's internal _id as your custom attribute. For example, suppose the db contains user identities with attributes like "_id, name, address, ..." and you want to use this _id's value in your application as a userId for external reference.
So, as @SalvadorDali said, the _id field is really important in MongoDB and you cannot have a document without it. All you can do is let the db store the value under its default _id, while exposing it through your own userId by applying these changes in your JSON file:
"properties": {
"userId":{
"type": "string",
"id":"true",
"index":"true",
"description": "unique id of identity"
}
}
Now whenever you store a unique value, it is saved in the db under the default _id, and externally you can access that value through the userId field.
Correct me if I got your question wrong.