I want to export "street" : "Downstreet 34", but only if the source value is 3 (don't export it when source is anything else).
Sample 1 JSON
"addresses" : [ {"source" : 3 , "street" : "Downstreet 34"}]
Export "street" : "Downstreet 34"
Sample 2 JSON
"addresses" : [ {"source" : 2 , "street" : "Downstreet 34"}]
Don't export "street" : "Downstreet 34"
mongoexport --db db_name --collection collection_name --query '{ "addresses.source": 3, "addresses.street": "Downstreet 34" }' --out output_file.json
This should run; update the query statement as required for your database, collection, and field names, and make simple adjustments if it does not work.
db.collection.find(
{ source: 3 },
{ street: 1 }
)
A general pattern you can use to build queries like these (a SQL query and its MongoDB equivalent):
# SQL QUERY
SELECT user_id, status
FROM users
WHERE status = "A"
#mongoDB Query
db.users.find(
{ status: "A" },
{ user_id: 1, status: 1, _id: 0 }
)
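Since source and street live inside the addresses array in the sample documents, the filter needs dot notation (as above) or $elemMatch to tie both conditions to the same array element. A minimal sketch, assuming the same placeholder database and collection names, that uses $elemMatch and also narrows the export to the addresses field with --fields:
mongoexport --db db_name --collection collection_name --query '{ "addresses": { "$elemMatch": { "source": 3, "street": "Downstreet 34" } } }' --fields addresses --out output_file.json
Note that mongoexport exports whole fields, so the output still contains the full addresses array of each matching document rather than the bare street string.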
I am having problems exporting subdocuments that are stored in MongoDB to a .CSV.
My data: a mongo collection that contains a unique user ID and scores from personality quizzes.
I would like a CSV that has three columns: user_id, name, raw_score. To add a further layer of complexity, within the 'scales' subdocument some users will have more than two entries (some quizzes produced more than 2 personality scores).
An example of my data minus documents that I am not interested in:
"assessment":{
"user_id" : "5839b1a654842f35617ad100",
"submissions" : {
"results" : {
"scales" : [
{
"scale" : {
"name" : "Security",
"code" : "SEC",
"multiplier" : 1
},
"raw_score" : 2
},
{
"scale" : {
"name" : "Power",
"code" : "POW",
"multiplier" : -1
},
"raw_score" : 3
}
],
}
}
}
}
I have tried using mongoexport but this produces a CSV that only has a user_id column.
rekuss$ mongoexport -d production_hoganx_app -c assessments --type=csv -o app_personality.csv -f user_id,results.scales.scale.name,results.scales.raw_score
Any ideas where I am going wrong?
Please let me know if you need any more information.
Many thanks
You should try removing the '=' sign from --type, i.e. use --type csv instead of --type=csv.
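If that alone still leaves you with only a user_id column, the underlying problem is that mongoexport cannot flatten array elements into separate CSV rows. A hedged sketch of the $unwind-then-export approach (the same idea as the answer further below); the collection name and field paths are taken from the attempted command above and may need adjusting if results actually sits under submissions as in the sample document:
db.assessments.aggregate([
{$unwind: "$results.scales"},
{$project: {_id: 0, user_id: 1, name: "$results.scales.scale.name", raw_score: "$results.scales.raw_score"}},
{$out: "assessments_flat"}
])
Then export the flattened collection:
mongoexport -d production_hoganx_app -c assessments_flat --type csv -o app_personality.csv -f user_id,name,raw_score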
My doc:
db.org.insert({
"id" : 28,
"organisation" : "Mickey Mouse company",
"country" : "US",
"contactpersons" : [{
"title" : "",
"typecontact" : "D",
"mobilenumber" : "757784854",
"firstname" : "Mickey",
"lastname" : "Mouse",
"emailaddress" : "mickey#mouse.com"
},
{
"title" : "",
"typecontact" : "E",
"mobilenumber" : "757784854",
"firstname" : "Donald",
"lastname" : "Duck",
"emailaddress" : "donald#duck.com"
}],
"modifieddate" : "2013-11-21T16:04:49+0100"
});
My query:
mongoexport --host localhost --db sample --collection org --type csv --fields country,contactpersons.0.firstname,contactpersons.0.emailaddress --out D:\info_docs\org.csv
With this query I'm able to get only the first element's values from contactpersons, but I'm trying to export the second element's values as well.
How can I resolve this? Can anyone please help me out with this?
You're getting exactly the first document in contactpersons because you are only exporting the first element of the array (contactpersons.0.firstname). mongoexport can't export several or all elements of an array, so what you need to do is to unwind the array and save it in another collection. You can do this with the aggregation framework.
First, do an $unwind of contactpersons, then $project the fields you want to use (in your example, country and contactpersons), and finally save the output in a new collection with $out.
db.org.aggregate([
{$unwind: '$contactpersons'},
{$project: {_id: 0, org_id: '$id', contacts: '$contactpersons', country: 1}},
{$out: 'aggregate_org'}
])
Now you can do a mongoexport of contacts (which is the result of the $unwind of contactpersons) and country.
mongoexport --host localhost --db sample --collection aggregate_org --type=csv --fields country,contacts.firstname,contacts.emailaddress --out D:\info_docs\org.csv
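If you want to sanity-check the intermediate collection before exporting (aggregate_org is the name given to $out above), a quick look in the shell:
db.aggregate_org.find().limit(2).pretty()
Also note that $out replaces the aggregate_org collection on every run, so re-running the pipeline after data changes is safe.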
I'm trying to combine two collections into one (not a join). I have two databases with the same collections and collection structure.
As example:
Collection test1 db1:
{
"_id" : ObjectId("574c339b3644a65b36e77359"),
"appName" : "App1",
"customerId" : "Client1",
"environment" : "PROD",
"methods" : []
}
Collection test2 db2:
{
"_id" : ObjectId("574c367d627b45ef0abc00e5"),
"appName" : "App2",
"customerId" : "Client2",
"environment" : "PROD",
"methods" : []
}
I'm trying to create the following:
A single collection test in one database, where the documents from test1 and test2 sit side by side but are not merged with one another. What would be the proper way to achieve this?
{
"_id" : ObjectId("574c339b3644a65b36e77359"),
"appName" : "App1",
"customerId" : "Client1",
"environment" : "PROD",
"methods" : []
},
{
"_id" : ObjectId("574c367d627b45ef0abc00e5"),
"appName" : "App2",
"customerId" : "Client2",
"environment" : "PROD",
"methods" : []
}
The complication is that the _ids are referenced in other MongoDB collections, so they must be preserved.
The fastest way is to create a dump of each collection (using mongodump) and restore both into the same target collection (the example uses Windows paths).
mongodump --db test1 --collection test1 --out c:\dump\test1
mongodump --db test2 --collection test2 --out c:\dump\test2
mongorestore --db test3 --collection test3 c:\dump\test1\test1\test1.bson
mongorestore --db test3 --collection test3 c:\dump\test2\test2\test2.bson
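Because mongorestore keeps the original _id values, references from other collections remain valid. A quick sanity check in the mongo shell, using the two _ids from the sample documents above:
use test3
db.test3.find({ _id: { $in: [ ObjectId("574c339b3644a65b36e77359"), ObjectId("574c367d627b45ef0abc00e5") ] } }).count()
This should return 2 once both restores have completed.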
I want to connect MongoDB and Elasticsearch. I used mongo-connector to connect them, following the setup instructions from the link below:
http://vi3k6i5.blogspot.in/2014/12/using-elastic-search-with-mongodb.html
I am able to connect MongoDB and Elasticsearch, but by default mongo-connector creates indices in Elasticsearch for all MongoDB databases.
I want to create only one index, for my one database, and insert only selected fields of the documents. For example, in the mongo shell:
use hotels
db.restaurants.insert(
{
"address" : {
"street" : "2 Avenue",
"zipcode" : "10075",
"building" : "1480",
"coord" : [ -73.9557413, 40.7720266 ],
},
"borough" : "Manhattan",
"cuisine" : "Italian",
"grades" : [
{
"date" : ISODate("2014-10-01T00:00:00Z"),
"grade" : "A",
"score" : 11
},
{
"date" : ISODate("2014-01-16T00:00:00Z"),
"grade" : "B",
"score" : 17
}
],
"name" : "Vella",
"restaurant_id" : "41704620"
}
)
This will create the database hotels and the collection restaurants. Now I want to create an index and put only the address field into Elasticsearch for that index.
Below are the steps I tried, but they are not working.
First I start mongo-connector like below:
Imomadmins-MacBook-Pro:~ jayant$ mongo-connector -m localhost:27017 -t localhost:9200 -d elastic_doc_manager --oplog-ts oplogstatus.txt
Logging to mongo-connector.log.
Then, from a new shell tab, I ran commands like:
curl -XPUT 'http://localhost:9200/hotels.restaurants/'
curl -XPUT "http://localhost:9200/hotels.restaurants/string/_mapping" - d'{
"string": {
"properties" : {
"address" : {"type" : "string"}
}
}
}'
But only an index named hotels.restaurants is created in Elasticsearch; I can't see any documents in it.
Please suggest how to get documents into hotels.restaurants.
Well, I found the answer to my question: when starting mongo-connector we can specify the namespace (database.collection) and the list of fields we are interested in. See the command below:
$ mongo-connector -m localhost:27017 -t localhost:9200 -d elastic_doc_manager --oplog-ts oplogstatus.txt --namespace-set hotels.restaurants --fields address,grades,name
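To check that documents actually made it into the index, a plain Elasticsearch search request works (the index name matches the namespace above):
curl -XGET 'http://localhost:9200/hotels.restaurants/_search?pretty'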
The question is:
Consider the following location: [-72, 42] and the range (circle) of radius 2 around this point. Write a query to find all the states that intersect this range (circle). Then, you should return the total population and the number of cities for each of these states. Rank the states based on number of cities.
I have written this so far:
db.zips.find({loc: {$near: [-72, 42], $maxDistance: 2}})
and a sample output of that is:
{ "city" : "WOODSTOCK", "loc" : [ -72.004027, 41.960218 ], "pop" : 5698, "state" : "CT", "_id" : "06281" }
In SQL I would simply do a GROUP BY "state"; how would I be able to do that here while also counting all the cities and the total population?
Assuming you follow the mongoimport routine for the zip code data (I brought mine into a collection called zips7):
mongoimport --db mydb --collection zips7 --type json --file c:\users\drew\downloads\zips.json
or
mongoimport --db mydb --collection zips7 --type json --file /data/playdata/zips.json
(depending on your OS and paths)
then
db.zips7.ensureIndex({loc:"2d"})
db.zips7.find({loc: {$near: [-72, 42], $maxDistance: 2}}).forEach(function(doc){
db.zips8.insert(doc);
});
Note that db.zips7.stats() shows roughly 30k documents while zips8 ends up with 100 rows: a $near query against a 2d index returns at most 100 documents by default.
db.zips8.aggregate([
{ $group : {
_id : "$state",
totalPop : { $sum : "$pop" },
town_count : { $sum : 1 }
} }
])
{
"result" : [
{
"_id" : "RI",
"totalPop" : 39102,
"town_count" : 10
},
{
"_id" : "MA",
"totalPop" : 469583,
"town_count" : 56
},
{
"_id" : "CT",
"totalPop" : 182617,
"town_count" : 34
}
],
"ok" : 1
}
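The question also asks to rank the states by number of cities; a small extension of the same pipeline with a $sort stage (descending by town_count) does that:
db.zips8.aggregate([
{ $group : { _id : "$state", totalPop : { $sum : "$pop" }, town_count : { $sum : 1 } } },
{ $sort : { town_count : -1 } }
])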
The equivalent syntax in Mongoid (the radius in miles is divided by the Earth's radius, roughly 3959 miles, to convert it to radians, which is what $centerSphere expects):
Zips.where(:loc => {"$within" => {"$centerSphere"=> [[lng.to_f,lat.to_f],miles.to_f/3959]}})
Example:
Zips.where(:loc => {"$within" => {"$centerSphere"=> [[-122.4198185,37.7750454],2.0/3959]}})
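For reference, a sketch of the same $centerSphere filter in the mongo shell ($geoWithin is the current name for $within), using the coordinates and radius from the Mongoid example:
db.zips.find({ loc: { $geoWithin: { $centerSphere: [ [ -122.4198185, 37.7750454 ], 2.0/3959 ] } } })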