I'm working on a project where users can post things, and I'm wondering whether my Firebase database structure is efficient. Below is how my database looks so far: the posts node contains every post that users create, and each user tracks their own posts through a posts node under their uid. Is there a better way of structuring my data, or am I good to go? Any advice would be appreciated!
{
  "posts" : {
    "-KVRT-4z1AUoztWnF-pe" : {
      "caption" : "",
      "likes" : 0,
      "pictureUrl" : "https://firebasestorage.googleapis.com/v0/b/cloub-4fdbd.appspot.com/o/users%2FufTgaqudXeUciW5bGgCSfoTRUw92%2F208222E1-8E20-42A0-9EEF-8AF34F523878.png?alt=media&token=9ec5301e-d913-44ee-81d0-e0ec117017de",
      "timestamp" : 1477946376629,
      "writer" : "ufTgaqudXeUciW5bGgCSfoTRUw92"
    }
  },
  "users" : {
    "ufTgaqudXeUciW5bGgCSfoTRUw92" : {
      "email" : "Test1@gmail.com",
      "posts" : {
        "-KVRT-4z1AUoztWnF-pe" : {
          "timestamp" : 1477946376677
        }
      },
      "profileImageUrl" : "https://firebasestorage.googleapis.com/v0/b/cloub-4fdbd.appspot.com/o/profile_images%2F364DDC66-BDDB-41A4-969E-397A79ECEA3D.png?alt=media&token=c135d337-a139-475c-b7a4-d289555b94ca",
      "username" : "Test1"
    }
  }
}
When working with NoSQL data, you need to take care of a few things:
Avoid nesting the data too deep
Flatten your data structure as much as possible
Prefer dictionaries (key-value maps)
Set up Security Rules [Firebase]
Try this structure:
users: {
    userID1: {
        // ...user info...
        posts: {
            postID1: true,
            postID2: true,
            postID3: true
        }
    },
    userID2: { /* ...user info... */ },
    userID3: { /* ...user info... */ },
    userID4: { /* ...user info... */ }
},
posts: {
    userID1: {
        postID1: { /* ...post content... */ },
        postID2: { /* ...post content... */ },
        postID3: { /* ...post content... */ }
    }
}
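With a structure like this, creating a post is usually done as a single multi-path "fan-out" update, so the post body and the user's index entry are written together. Here is a minimal sketch, assuming the Firebase JS SDK and a uid variable for the signed-in user (both are assumptions, not part of the answer above):

// Fan-out write sketch (Firebase JS SDK assumed; uid and post fields are hypothetical).
var postKey = firebase.database().ref('posts').child(uid).push().key;

var updates = {};
// Full post content lives under posts/<uid>/<postKey>...
updates['posts/' + uid + '/' + postKey] = {
    caption: '',
    likes: 0,
    writer: uid,
    timestamp: firebase.database.ServerValue.TIMESTAMP
};
// ...while the user's node only keeps a lightweight index entry.
updates['users/' + uid + '/posts/' + postKey] = true;

firebase.database().ref().update(updates); // single multi-path write

Because a multi-path update() is applied atomically, either both locations are written or neither is.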
Keep the data flat and shallow. Store related data elsewhere in the tree rather than nesting it under a node that merely references it, duplicating data where that helps keep the tree shallow.
The goal is fast requests that return only the data you need. Keep in mind that whenever a node changes and a client-side listener on it fires, that node and all of its children are sent to the client. Duplicating data across the tree keeps those payloads small and the requests quick.
This process of flattening the data is known as "denormalization", and this section of the Firebase docs does a nice job of providing guidance:
https://firebase.google.com/docs/database/android/structure-data
In your example above I see post metadata nested under "users", a nested list that grows without bound. Every time something changes under "users", the listener fires and all of that data is transmitted to the client again. Consider instead fetching the post data from the "posts" node, filtered by the writer's uid.
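For example, with the flat "posts" node from your original structure, one way to load a single user's posts is to query on the writer field. A minimal sketch using the Firebase JS SDK (the uid literal comes from your example; you would also want an ".indexOn": "writer" rule so the query is indexed server-side):

// Sketch: fetch one user's posts from the flat "posts" node (Firebase JS SDK assumed).
firebase.database().ref('posts')
    .orderByChild('writer')
    .equalTo('ufTgaqudXeUciW5bGgCSfoTRUw92')
    .once('value')
    .then(function (snapshot) {
        snapshot.forEach(function (child) {
            console.log(child.key, child.val().caption);
        });
    });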
I am working on a GraphQL mutation and need help here. My document looks like this:
{
    "_id" : ObjectId("5bc02db357146d0c385d4988"),
    "item_type" : "CategoryMapping",
    "id" : null,
    "CategoryGroupName" : "Mystries & Thriller",
    "CustomCategory" : [
        {
            "name" : "Private Investigator",
            "MappedBisacs" : [
                "investigator",
                "Privately owned",
                "Secret"
            ]
        },
        {
            "name" : "Crime Investigator",
            "MappedBisacs" : [
                "crime investigator",
                "crime thriller"
            ]
        }
    ]
}
UI
The user updates MappedBisacs through a list of checkboxes, so they can add, update, or delete the list of bisacs.
Problem - when the client sends a GraphQL mutation like the following:
mutation {
CategoryMapping_add(input: {CategoryGroupName: "Mystries & Thriller", CustomCategory: [{name: "Crime Investigator", MappedBisacs: ["investigator", "dafdfdaf", "dafsdf"]}]}) {
clientMutationId
}
}
I need to find the specific custom category and update its bisacs array.
I am not sure I fully understood, but this is more a question about MongoDB than about GraphQL itself. First find the document you want (I would use the document's id rather than CategoryGroupName); then you can update the array in several ways. For example, once you have found the document, you could access the array, spread its contents into a new array together with the new data from your mutation, and save the object with an update (if you simply want to add new data without removing any).
So, it depends on the case.
Check: https://docs.mongodb.com/manual/reference/operator/update-array/
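For the add-only case, a single update with the array operators linked above can do it in one round trip. A rough mongo shell sketch, assuming a categoryMappings collection (the collection name is an assumption, not from the question):

// Sketch: append bisacs to the matching CustomCategory entry.
// The positional $ operator targets the element matched by "CustomCategory.name".
db.categoryMappings.updateOne(
    { CategoryGroupName: "Mystries & Thriller", "CustomCategory.name": "Crime Investigator" },
    { $addToSet: { "CustomCategory.$.MappedBisacs": { $each: ["investigator", "dafdfdaf", "dafsdf"] } } }
)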
Hope it helps! :)
We are trying to represent this data in a web application. What would be the appropriate way to represent it? We considered a relational structure, but the data is hierarchical in nature. Is MongoDB a better fit in this scenario?
As mentioned in the comments, MongoDB's dynamic schemas are a good fit for this.
Let's assume our document is structured like this:
report {
    _id: ...,
    documentBycode: {
        _id: "G8",
        dataSource,
        remarks,
        fields: [{
            indicator: "suicide",
            baseline: {
                data,
                year,
                source
            },
            milestone: [{
                year: 2017,
                value: 15
            }, {}
            ],
            ...
            ...
            fields: [{
                name: "nfhs1996",
                value: "n/a",
                order: 1 /* this is not mandatory */
            }, {
                name: "ndhs2011",
                value: "n/a",
                order: 2
            }]
        }]
    }
}
Then you can add or modify elements as needed inside the [arrays], and you always get the report data by retrieving a single document from the datastore.
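For example, appending a new milestone is a single in-place update against that one document. A rough mongo shell sketch, assuming a reports collection (the collection name and the new values are assumptions):

// Sketch: push a new milestone into the embedded array.
// The positional $ operator targets the fields element matched in the query.
db.reports.updateOne(
    { "documentBycode._id": "G8", "documentBycode.fields.indicator": "suicide" },
    { $push: { "documentBycode.fields.$.milestone": { year: 2018, value: 12 } } }
)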
What could also be interesting: you can store multiple different report structures in the same collection, so you can literally store the full ViewModel data AS IS.
Any comment welcome!
I am looking to build a blogging system and came across the following blog post.
http://blog.mongolab.com/2012/08/why-is-mongodb-wildly-popular/
While it's nice to see how we can store everything in one Mongo document as a JSON-like object (example JSON from the blog pasted below) rather than distributing the data across multiple tables, I'm having trouble understanding how this can accommodate a hypothetically very long comment thread.
{
    _id: 1234,
    author: { name: "Bob Davis", email : "bob@bob.com" },
    post: "In these troubled times I like to …",
    date: { $date: "2010-07-12 13:23UTC" },
    location: [ -121.2322, 42.1223222 ],
    rating: 2.2,
    comments: [
        { user: "jgs32@hotmail.com",
          upVotes: 22,
          downVotes: 14,
          text: "Great point! I agree" },
        { user: "holly.davidson@gmail.com",
          upVotes: 421,
          downVotes: 22,
          text: "You are a moron" }
    ],
    tags: [ "Politics", "Virginia" ]
}
Aside from the comments key, which is an array of comment objects and lets us store an endless number of comments inside the document (rather than in a separate comments table that would need a join to relate the rows in a relational database), the rest of the fields (author, post, date, location, rating, tags) could just as well be columns on a relational database table.
Since there is a limit of 16MB per document, what happens when this blog attracts a lot of comments?
Also, why can't I store a JSON object in a relational database column? After all, it's just text, isn't it?
First, a clarification: MongoDB actually stores BSON, which is essentially a superset of JSON that supports more data types.
Since there is a limit of 16MB per document, what happens when this blog attracts a lot of comments?
You won't be able to grow a document past 16MB, so you'd lose the ability to add more comments. But you don't need to store all the comments on the blog post document. You could store the first N and retire old comments to a comments collection as new ones are added, or store comments in another collection with a parent reference. The way comments are stored should jibe with how you expect them to be used. 16MB of comments is really a lot - you might even need a special solution for the occasional post that gets that kind of activity, an approach totally different from the normal way of handling comments.
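A rough sketch of the parent-reference approach, with assumed collection and field names (comments, post_id, and created are not from the blog post):

// Sketch: comments in their own collection, referencing the post by its _id.
db.comments.insert({
    post_id: 1234,                      // _id of the blog post document
    user: "jgs32@hotmail.com",
    upVotes: 22,
    downVotes: 14,
    text: "Great point! I agree",
    created: new Date()
})

// Page through a post's comments without ever touching the 16MB limit:
db.comments.find({ post_id: 1234 }).sort({ created: -1 }).limit(20)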
We can store json in a relational database. So what is the value of Mongo I'm getting?
Here are two ways of storing JSON (in MongoDB).
> db.test.drop()
> db.test.insert({ "name" : { "first" : "Yogi", "last" : "Bear" }, "location" : "Yellowstone", "likes" : ["picnic baskets", "PBJ", "the great outdoors"] })
> db.test.findOne()
{
"_id" : ObjectId("54f9f41f245e945635f2137b"),
"name" : {
"first" : "Yogi",
"last" : "Bear"
},
"location" : "Yellowstone",
"likes" : [
"picnic baskets",
"PBJ",
"the great outdoors"
]
}
> var jsonstring = '{ "name" : { "first" : "Yogi", "last" : "Bear" }, "location" : "Yellowstone", "likes" : ["picnic baskets", "PBJ", "the great outdoors"] }'
> db.test.drop()
> db.test2.insert({ "myjson" : jsonstring })
> db.test2.findOne()
{
"_id" : ObjectId("54f9f535245e945635f2137d"),
"myjson" : "{ \"name\" : { \"first\" : \"Yogi\", \"last\" : \"Bear\" }, \"location\" : \"Yellowstone\", \"likes\" : [\"picnic baskets\", \"PBJ\", \"the great outdoors\"] }"
}
Can you store and use JSON the first way using a relational database? How useful is JSON stored in the second way compared to the first?
There are lots of other differences between MongoDB and relational databases that make one better than the other for various use cases - but going further into that is too broad for an SO answer.
Sorry, are you suggesting that with Mongo, JSON documents can be stored without using escape characters, whereas with an RDBMS I must escape the double quotes? I wasn't aware that was the case.
I have two MongoDB collections, user and customer, which are in a one-to-one relationship. I'm new to MongoDB and I'm inserting documents manually, although I have Mongoose installed. I'm not sure which is the correct way of storing a document reference in MongoDB.
I'm using normalized data model and here is my Mongoose schema snapshot for customer:
/** Parent user object */
user: {
type: Schema.Types.ObjectId,
ref: "User",
required: true
}
user
{
"_id" : ObjectId("547d5c1b1e42bd0423a75781"),
"name" : "john",
"email" : "test#localhost.com",
"phone" : "01022223333",
}
I want to make a reference to this user document from the customer document. Which of the following is correct - (A) or (B)?
customer (A)
{
"_id" : ObjectId("547d916a660729dd531f145d"),
"birthday" : "1983-06-28",
"zipcode" : "12345",
"address" : "1, Main Street",
"user" : ObjectId("547d5c1b1e42bd0423a75781")
}
customer (B)
{
"_id" : ObjectId("547d916a660729dd531f145d"),
"birthday" : "1983-06-28",
"zipcode" : "12345",
"address" : "1, Main Street",
"user" : {
"_id" : ObjectId("547d5c1b1e42bd0423a75781")
}
}
Remember these things
Embedding is better for...
Small subdocuments
Data that does not change regularly
When eventual consistency is acceptable
Documents that grow by a small amount
Data that you’ll often need to perform a second query to fetch
Fast reads
References are better for...
Large subdocuments
Volatile data
When immediate consistency is necessary
Documents that grow a large amount
Data that you’ll often exclude from the results
Fast writes
Variant A is better.
You can also use populate with Mongoose.
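A minimal sketch of what that could look like, assuming Customer and User models compiled from schemas like the snippet above (the model names are assumptions):

// Sketch: resolve the user reference from variant A with Mongoose populate.
Customer.findById("547d916a660729dd531f145d")
    .populate("user")
    .exec(function (err, customer) {
        if (err) throw err;
        console.log(customer.user.name); // "john"
    });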
Use variant A. As long as you don't want to denormalize any other data (like the user's name), there's no need to create a child object.
This also avoids unexpected complexities with the index, because indexing an object might not behave like you expect.
Even if you were to embed an object, _id would be a weird name - _id is only a reserved name for a first-class database document.
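For instance, with variant A the reference is a plain ObjectId, so an ordinary index and equality query behave predictably. A mongo shell sketch (the customers collection name is assumed):

// Sketch: index and query the plain ObjectId reference from variant A.
db.customers.createIndex({ user: 1 })
db.customers.find({ user: ObjectId("547d5c1b1e42bd0423a75781") })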
One to one relations
1-to-1 relations are relations where each item corresponds to exactly one other item, e.g.:
an employee has a resume and vice versa
a building has a floor plan and vice versa
a patient has a medical history and vice versa
//employee
{
_id : '25',
name: 'john doe',
resume: 30
}
//resume
{
_id : '30',
jobs: [....],
education: [...],
employee: 25
}
We can model the employee-resume relation by having a collection of employees and a collection of resumes and having the employee point to the resume through linking, where an ID in the employee document corresponds to an ID in the resume collection. Or, if we prefer, we can link in the other direction, with an employee key inside the resume collection that points back to the employee. Or, if we want, we can embed: we could take the entire resume document and embed it right inside the employee collection, or vice versa.
Whether to embed depends on how the data is accessed by the application and how frequently it is accessed. We need to consider:
frequency of access
the size of the items - what is growing all the time and what is not. Every time we add something to a document there is a point beyond which the document needs to be moved within the collection, and a document cannot grow beyond 16MB (which is mostly unlikely).
atomicity of data - there are no multi-document transactions in MongoDB; there are atomic operations on individual documents. So if we know we cannot tolerate any inconsistency and want to update the employee plus the resume all at once, we may decide to put them into the same document (embedding one way or the other) so that we can update it all in a single atomic operation.
In MongoDB it is generally recommended to embed documents where you can, especially in a case like yours with 1-to-1 relations.
Why? MongoDB has no join operations, so every reference you follow costs an extra query (though that is not the main reason). The stronger reason is that each extra lookup (theoretically) needs a disk seek, which takes on the order of 20 ms, whereas an embedded sub-document comes back with the single seek that fetches the parent document.
I believe the best db schema for you is to use a single document, and a single _id, for all of your entities:
{
_id : ObjectId("547d5c1b1e42bd0423a75781"),
userInfo :
{
"name" : "john",
"email" : "test#localhost.com",
"phone" : "01022223333",
},
customerInfo :
{
"birthday" : "1983-06-28",
"zipcode" : "12345",
"address" : "1, Main Street",
},
staffInfo :
{
........
}
}
Now if you just want the userInfo you can use:
db.users.findOne({_id : ObjectId("547d5c1b1e42bd0423a75781")},{userInfo : 1}).userInfo;
it will give you just the userInfo:
/* 0 */
{
"name" : "john",
"email" : "test#localhost.com",
"phone" : "01022223333"
}
And if you just want the customerInfo you can use:
db.users.findOne({_id : ObjectId("547d5c1b1e42bd0423a75781")},{customerInfo : 1}).customerInfo;
it will give you just the customerInfo:
/* 0 */
{
"birthday" : "1983-06-28",
"zipcode" : "12345",
"address" : "1, Main Street"
}
and so on.
This schema requires the minimum number of round trips, and you are actually using MongoDB's document-based design with the best performance you can achieve.
Currently in my database I have message objects set up like the following.
{
"name" : "System",
"message" : "Sean Callahan has entered the room.",
"time" : 1406479167270,
"type" : "system_message",
"room" : "helloroom",
"_id" : "4yeHzhHAQmGJNtHww"
}
I basically want to migrate my data so that every message has a roomId that points it at the appropriate room. Currently this is done with the room attribute, and I now see the fault in my ways for various reasons.
My room objects are set up something like this:
{
    "_id" : "xxxxxxxxx",
    "room_name" : "testingroom"
}
So I was hoping there was a way to run a one-liner that would add the correct roomId to every existing message based on its current room attribute.
I was thinking something along the lines of:
db.messages.update({}, {$set: {roomId: db.rooms.findOne({room_name: room})._id}})
As of now, I am getting "room is not defined", which makes perfect sense. But I can't seem to get it right, and this may just not be possible in a one-line query.
As you discovered, this isn't possible in a one-line query since you need to join data from two collections.
Here's an example of how to add the missing field in the mongo shell:
db.messages.find(
    { roomId: { $exists: false } }
).forEach(function(message) {
    // Look up the room whose name matches this message's room attribute
    var room = db.rooms.findOne({ room_name: message.room });
    if (room && room._id) {
        db.messages.update(
            { _id: message._id },
            { $set: { roomId: room._id } }
        )
    }
})
You could tidy this up with some error checking, and for updates on a large collection consider using the Bulk Update API (only available in MongoDB 2.6+).
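A rough sketch of that bulk variant, assuming the MongoDB 2.6+ shell (same logic as above, with the writes batched):

// Sketch: the same migration using the Bulk API to batch the updates.
var bulk = db.messages.initializeUnorderedBulkOp();
db.messages.find({ roomId: { $exists: false } }).forEach(function(message) {
    var room = db.rooms.findOne({ room_name: message.room });
    if (room) {
        bulk.find({ _id: message._id }).updateOne({ $set: { roomId: room._id } });
    }
});
bulk.execute();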