I am working on an application which will have users who create posts, and other users can like/comment on any post.
I am trying to figure out the best way to design the DB tables for this. I have read the AnyPic tutorial on the parse.com site. They keep all comments and likes in a table called "Activity", which makes sense: you can query any type of activity (like/comment) from a separate table without having to touch the "posts" table.
My question is: in this scenario, how do I fetch all posts that the current user created, along with the likes and comments on each of those posts?
The AnyPic app by Parse makes a separate request to fetch the number of likes on each post, which I think is not ideal. I am new to NoSQL data stores, so if someone could help me out with a suggestion on how to structure the data, that would be great.
Also, how bad is it to store all likes/comments as an array in the post itself? I think this won't scale, but I might be wrong.
Thanks
In terms of Parse, I would use an afterSave Cloud Function to update the Post anytime a like/comment is added.
Have a look at the documentation here; in the simplest case you just create an afterSave for the Activity class that increments the like/comment count.
In a more advanced scenario you might want to handle updates/deletes too. E.g. if someone can change their 'like' to 'not like', you would need to look at the before/after value and increase/decrease the counter as needed.
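A minimal sketch of the simple case, assuming the Activity row has a type field ("like"/"comment") and a post pointer, and that Post carries likeCount/commentCount fields (all of these names are assumptions, not from the tutorial):

```javascript
// Cloud Code sketch: keep denormalized counters on the Post up to date.
// Field names (type, post, likeCount, commentCount) are assumptions.
Parse.Cloud.afterSave("Activity", function(request) {
  var activity = request.object;
  var post = activity.get("post"); // pointer back to the Post

  if (activity.get("type") === "like") {
    post.increment("likeCount");   // atomic, no fetch needed
  } else if (activity.get("type") === "comment") {
    post.increment("commentCount");
  }
  post.save();
});
```

With the counters denormalized onto Post, a single query for the current user's posts returns the like/comment counts for free; an afterDelete hook can decrement the same counters if activities can be removed.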
I am a fan of storing extra 'redundant' data, and NoSQL/document-DB systems work well for that. This is based on the idea that writes are done infrequently compared to the number of reads, so doing some extra work during/after the write has less impact and makes the app work more smoothly.
Related
I am building a personal work/career portfolio web app project and plan on using MongoDB for my database (I plan to build the project with the MERN stack). Most of my data is not one-time data (such as education and work experiences); however, I have a few pieces of data, such as my personal summary (the content for my "About Me" section) and skills summary, that are one-time-only data (I think "single instance" might be a better-fitting term). I would like to store all of the data in a database and set up an admin end to manage and edit the data. However, I am not sure how to go about storing the one-time data in my MongoDB database.
One idea I had was to create a collection solely for the one-time data, and only allow the user (me) to update and read the documents in that collection. Another idea was placing all of my portfolio data into a single collection called "entries" and giving each "entry" a type (such as "Education" or "Personal Summary"); when I retrieve the data from the collection, I would gather together all the documents with the same value in their type field (a rough sketch of what I mean follows below). I was thinking of storing each of the types as a constant on my server. However, my biggest concern with both ideas is whether they would be considered bad practice or not.
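For illustration, the second idea would look something like this; the collection name, field names, and type constants are just my working assumptions:

```javascript
// Sketch of the "entries" collection idea; all names are placeholders.
const { MongoClient } = require("mongodb");

const ENTRY_TYPES = {
  EDUCATION: "Education",
  PERSONAL_SUMMARY: "Personal Summary",
};

async function getEntriesByType(type) {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  try {
    // e.g. all "Education" documents, or the lone "Personal Summary" one
    return await client
      .db("portfolio")
      .collection("entries")
      .find({ type })
      .toArray();
  } finally {
    await client.close();
  }
}
```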
I would be very appreciative if anyone has any advice on how to solve this problem.
I implemented this a while back on one of my small projects, and after discussing it with some professionals I'm in contact with, they said that the best approach would be to create a collection with a single document that contains all the information (the links, about section, etc.).
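A minimal sketch of that approach, assuming a siteInfo collection with a fixed _id (both names are made up here):

```javascript
// Singleton-document pattern: one collection holding one well-known document.
async function getSiteInfo(db) {
  return db.collection("siteInfo").findOne({ _id: "singleton" });
}

async function updateSiteInfo(db, fields) {
  // upsert so the first write creates the document
  return db.collection("siteInfo").updateOne(
    { _id: "singleton" },
    { $set: fields },
    { upsert: true }
  );
}
```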
One more thing that was suggested: we could use Redis solely for the purpose of storing this type of information as well.
Something I implemented a while back that is similar to the one-collection, single-doc approach: https://github.com/codelancedevs/Sundar-Clinic/tree/local-backend/src/api/app
Working on a similar approach here: https://github.com/kunalkeshan/Cam-O-Genics-Backend
Hope this is of some help; I'm still learning what the best approach might be. Open to any suggestions out there!
I am currently watching a tutorial on how to create an Instagram clone in Swift and want to understand the data model for the comments.
What is the purpose of using a model for the comments like:
a post-comments node (key = post-id) that maps each post to its comments
over something like a single comments collection, where every comment has the post-id in it?
Without knowing what exactly they're building, and the types of queries they need to support for the app, one can only guess that this post-comments collection satisfies the need for a query to find out which comments are a part of which posts, while still allowing queries that search all posts or all comments. You should find the part of the tutorial that queries this collection to find out what it's trying to do.
This tutorial might be kind of old, because this sort of thing would be a little bit easier to express today using collection group queries.
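For reference, if the comments are stored as a subcollection under each post (posts/{postId}/comments), a collection group query can search every comments subcollection at once. A sketch with the modular JavaScript SDK; the authorId field is an assumption:

```javascript
const { initializeApp } = require("firebase/app");
const {
  getFirestore, collectionGroup, query, where, getDocs,
} = require("firebase/firestore");

const db = getFirestore(initializeApp({ /* your Firebase config */ }));

// Find one user's comments across all posts: posts/{postId}/comments/{id}
async function commentsByUser(uid) {
  const q = query(
    collectionGroup(db, "comments"),  // every "comments" subcollection
    where("authorId", "==", uid)      // field name is an assumption
  );
  const snap = await getDocs(q);
  return snap.docs.map((d) => ({ id: d.id, ...d.data() }));
}
```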
I have a lot of existing data that I would like to use as training data for a wit.ai chatbot. The data is stored in a CSV file where each row has a statement/question and a response to that statement/question.
I know that wit.ai requires you to assign intents to the comments made, so I'm wondering if there is a way to simply send over the data I have and have the chatbot start learning intents on its own.
Thanks!
Thanks for posting. We know this is not perfect yet, but we released an import/export feature a few days ago. Looking at the structure of the JSON export, one could probably feed it with existing data fairly easily. It would require creating one story per statement/question and response. More info here:
https://wit.ai/docs/recipes#copyexportversion-my-app
"Teaching" Wit.Ai is not exactly what some might think it is.
You will have to create stories for your "User says" column. The replies are irrelevant, to be honest: you can't "teach" wit.ai to reply. Replies are defined in the story or in your code.
What wit.ai might need from your data are keywords and key phrases, which improve its entity recognition.
Here is the simplest example:
The entity color is recognized based on the keywords listed. So if you have a lot of data as examples of user input, you can try to break it down first into which entities each user input should produce, and then pull the keywords out of those inputs.
Using your data for "teaching" would be a little difficult, since it requires you to create a lot of Stories in wit.ai to cover the possible user inputs and entity identification. But you can still do it like this:
(rough example)
Make one story about user asking the time for example
Mark in the user input which entities should be derived from that input:
Sort the list you have to get all the possible ways of asking for the time:
How late is it?
Can you tell me the time?
I wonder what's the time now?
Use a script (Python, for example) to "shoot" all these user inputs at your story; a rough sketch follows below.
Once done, go to the Understanding tab of wit.ai and go through all the inputs, correcting/adding the entities you defined.
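A hedged sketch of such a replay script (here in JavaScript, using Wit.ai's public /message endpoint; the token and utterance list stand in for your CSV rows):

```javascript
// Replay existing user inputs against Wit.ai so they show up in the
// Understanding view for correction. Requires Node 18+ (built-in fetch).
const WIT_TOKEN = process.env.WIT_TOKEN; // your server access token

const utterances = [
  "How late is it?",
  "Can you tell me the time?",
  "I wonder what's the time now?",
];

async function send(text) {
  const res = await fetch(
    "https://api.wit.ai/message?q=" + encodeURIComponent(text),
    { headers: { Authorization: "Bearer " + WIT_TOKEN } }
  );
  return res.json();
}

(async () => {
  for (const u of utterances) {
    const parsed = await send(u);
    // the response's entity payload; exact shape depends on your app's setup
    console.log(u, "->", JSON.stringify(parsed.entities));
  }
})();
```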
This process will "teach" the entities, whether they are keyword-based or use some other recognition algorithm.
That's the best I can think of for using your existing data. Wit.Ai is different from other language-processing toolsets, and "teaching" it with existing data is somewhat "puzzling" :)
I want to store data for pages visited on my site by users. If each page has a page_id, and each user has both a session_id and a user_id, would using something like Redis be overkill, or should I just go with a basic text log file?
The reason I am thinking about a session_id is to ignore visits to the same page from the same session_id. I want to track the user_id so I'll be able to build a graph of users to viewed pages.
The app uses MySQL, and I can certainly use that; I'm only considering Redis because of its fast write speed. I can also use MongoDB, but if these are unnecessary approaches then I don't have to. Theoretically, the size of these metrics could grow quite large; at that point, is Redis still a viable option? I'm just trying to figure out what you have had success with.
Read some of the related real-world use cases on the MongoDB site here: 10gen.com/use-case/high-volume-data-feeds. That can give you some insight.
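If you do go with Redis instead, here is a minimal sketch of the structures the question describes: a per-session set to ignore repeat views, plus a per-user set that forms the user-to-viewed-pages graph. All key names are assumptions:

```javascript
const { createClient } = require("redis");

const redis = createClient();

// Record a page view, ignoring repeats from the same session.
async function recordView(sessionId, userId, pageId) {
  // SADD returns 1 only the first time this session sees this page
  const firstView = await redis.sAdd(`session:${sessionId}:pages`, pageId);
  if (firstView === 1) {
    await redis.sAdd(`user:${userId}:pages`, pageId); // user -> page edge
    await redis.incr(`page:${pageId}:views`);         // de-duplicated counter
  }
}

(async () => {
  await redis.connect();
  await recordView("sess-1", "user-42", "page-7");
  await redis.quit();
})();
```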
I'm trying to model a simple, experimental app as I learn Symfony and Doctrine.
My data model requires some flexibility, so I'm currently looking into the possibility of using either an EAV model or a document store such as MongoDB.
Here's my basic requirements:
Users will be able to store and share their favourite things (TV programme, website, song, etc.).
The list of possible 'things' a user can store is unknown. For example, a user may want to store their favourite animal.
Users can share their favourite things with other users. However, a user can decide what he / she shares with each other user. For example, a user may share their favourite movie with one user, but not another.
A typical user will log in and view all the favourite things from their list of friends, depending on what each friend has decided to share. The user will also update their own favourite things, which will be reflected when other users view their profile. Finally, the user may change which of their friends can see which of their favourite things.
I've worked a lot with Magento, which uses the EAV model extensively. However, I'm adding another layer of complexity by restricting which users can see what information.
I'm instantly drawn to MongoDB, as the schemaless format gives me the flexibility I require. However, I'm not sure how easy (or efficient) it will be to access the data once it's saved. I'm also concerned about how changes to the data will be managed, e.g. a user changing their favourite film.
I'm hoping someone can point me in the right direction. This is purely a demo app I'm building to further my knowledge, but I'm treating it like a real-world app where data access times are super-important.
Modelling this kind of app in a traditional relational DB makes me sweat when I think about the crazy number of joins I'd need to get the data for one user.
Thanks for reading this far, and please let me know if I can provide any more information.
Regards,
Fish
You need to choose a model based on how you need to access the data.
If you just need to filter out some values when viewing a user profile, a single document for each user would work quite well, with each favourite within it carrying a list of authorized user/group IDs that is applied in the application code. Both read and write are single operations on a known document in this case, so they will be fast.
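A rough sketch of that shape in shell syntax; all field names here are illustrative, not prescriptive:

```javascript
// One document per user; each favourite carries its own viewer list.
db.users.insertOne({
  _id: "fish",
  favourites: [
    { type: "movie",  value: "Alien",  visibleTo: ["alice"] },
    { type: "animal", value: "Badger", visibleTo: ["alice", "bob"] },
  ],
});

// Single read, then filter per-favourite permissions in application code:
const profile = db.users.findOne({ _id: "fish" });
const visibleToAlice = profile.favourites.filter((f) =>
  f.visibleTo.includes("alice")
);
```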
If you need views across multiple profiles though, your main document should probably be the favorite. You'll need to set up the right indexes, but performance shouldn't be a problem.
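In that layout each favourite becomes its own document, and a multikey index on the viewer list keeps the cross-profile query cheap (again, the field names are assumptions):

```javascript
// One document per favourite, indexed for "what can this viewer see?"
db.favourites.insertOne({
  owner: "fish",
  type: "movie",
  value: "Alien",
  visibleTo: ["alice"],
});
db.favourites.createIndex({ visibleTo: 1, owner: 1 });

// Everything alice's friends have shared with her:
db.favourites.find({
  visibleTo: "alice",                // matches array membership
  owner: { $in: ["fish", "carol"] }, // her friends list
});
```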
Actually, the permissions you describe don't add that much complexity to an EAV schema: as long as attributes can have multiple values, the permissions list is just one more attribute.