I am designing a schema for doctor's appointments. Each doctor will be given the option to update his/her timings for individual days of the month, with no limit on how far ahead: doctors can update the timings for any day of the current or any future month (only past dates will be disabled). The front-end part has been done, but what I fail to understand is how to create the Mongo model for it. Of course I cannot store dates for all months in the model. What is the right method to address this problem? TIA
If it were my project, I would start with 5 collections:
one for users (so you know who did what)
one for patients (recording all about the patient, including mobile number)
one for doctors (so you can show a list while registering time)
one for time registrations (with all details of the registration)
one for logging everything a user does
so you can go back in time and see who did what... it's never about pointing the finger at the person who made a mistake; look at it as a very easy way to find out what happened and how you can prevent it from happening again.
the content of each document is entirely up to you, as it will depend on exactly what you are building and how
I would think about something along these lines:
// Users
{
username, hashedPassword, // credentials
permissions: [], // for when you start using permissions
active, // never delete data, just set the flag to "false", if, by GDPR rules you need to delete, you can change the name to "DELETED BY GDPR" and still maintain all references
}
// Patients
{
name,
address: { street, street2, city, zipcode, country }, // for invoicing purposes
mobile, // so you send them an SMS 24h before saying their appointment is "tomorrow"
}
// Doctors
{
name,
weekAvailability: [], // days of the week that a doctor is available, as they normally work in more than one clinic
active, // never delete data, just set the flag to "false", if, by GDPR rules you need to delete, you can change the name to "DELETED BY GDPR" and still maintain all references
}
// Logs
{
action, // for example, "save", "add", "change"...
entity, // the collection name that the change happened
originalEntry, // the full document before changes
newEntry, // the full document after changes
timestamp, // the exact time of the change
user, // ref to users
}
// TimeRegistrations
{
user, // ref to users
patient, // ref to patients
doctor, // ref to doctors
title, description, appointmentStart, durationInMinutes,
}
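With a structure like this there is no need to store every date: the doctor's recurring weekAvailability plus the TimeRegistrations already booked are enough to derive the free slots for any future day on the fly. A minimal sketch in plain JavaScript (the exact field shapes here are my assumptions, not part of the schema above):

```javascript
// Derive free slots for a given date from a doctor's weekly availability
// and the appointments already booked that day.
// `weekAvailability` is assumed to hold entries like
//   { day: 1, start: "09:00", end: "12:00" }  (day: 0 = Sunday ... 6 = Saturday)
// `booked` holds { appointmentStart: Date, durationInMinutes } as in the
// TimeRegistrations sketch above.
function freeSlots(date, weekAvailability, booked, slotMinutes = 30) {
  const windows = weekAvailability.filter(w => w.day === date.getDay());
  const slots = [];
  for (const w of windows) {
    const [sh, sm] = w.start.split(':').map(Number);
    const [eh, em] = w.end.split(':').map(Number);
    for (let t = sh * 60 + sm; t + slotMinutes <= eh * 60 + em; t += slotMinutes) {
      const slotStart = new Date(date);
      slotStart.setHours(Math.floor(t / 60), t % 60, 0, 0);
      // A slot is taken if it overlaps any booked appointment
      const taken = booked.some(b => {
        const bStart = b.appointmentStart.getTime();
        const bEnd = bStart + b.durationInMinutes * 60000;
        const sEnd = slotStart.getTime() + slotMinutes * 60000;
        return slotStart.getTime() < bEnd && sEnd > bStart;
      });
      if (!taken) slots.push(slotStart);
    }
  }
  return slots;
}
```

This way past dates need no special handling at all: you simply never ask for them.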
regarding the infrastructure: create an API (REST or GraphQL, whichever you're most comfortable with) so you can separate the business logic from the frontend right from the start
your frontend (maybe React, Angular, Vue.js) should call a proxy (a Node.js server running alongside the frontend) to handle authentication and call the API, so that all you do in the frontend is something like
fetch('/api/doctors')
  .then(res => res.json())
  .then(json => {
    this.doctorsList = json;
  })
The same goes for authenticating a user: you can easily use a library that provides you with a JWT, and easily keep the user logged in with the right set of permissions.
A first approach, though not a good one in your case, is a single collection:
//doctors
{
_id: "",
appointments: [] // all appointments there
}
A second approach will be better, but note that in NoSQL the collection design depends entirely on how you want to get the data. Two collections:
//doctors
{
_id: "SOMETHING",
name: "SOMETHING"
}
//appointments
{
_id: "SOMETHING",
doctorId: "", // ref of doctor collection
appointmentAt: "",
appointmentAtMilli: "",
}
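The point of storing appointmentAtMilli is that "all appointments of one doctor on one day" becomes an indexable range query. A sketch of building that filter in plain JavaScript (field names from the collection above; the helper itself is my illustration):

```javascript
// Build a MongoDB filter for "all appointments of one doctor on one day",
// using appointmentAtMilli for an indexable range query.
function dayFilter(doctorId, date) {
  const start = new Date(date);
  start.setHours(0, 0, 0, 0);            // local midnight at the start of the day
  const end = new Date(start);
  end.setDate(end.getDate() + 1);        // local midnight of the next day
  return {
    doctorId,
    appointmentAtMilli: { $gte: start.getTime(), $lt: end.getTime() },
  };
}
```

The resulting object can be passed straight to find(), ideally backed by a compound index on { doctorId: 1, appointmentAtMilli: 1 }.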
I have two tables, Users and Matters. Ordinarily only admins can create a matter, so I made a one-to-many relationship between users and matters: User.hasMany(Matter) and Matter.belongsTo(User).
Now on the frontend, Matter is supposed to have a multi-select field called assignees where users gotten from the User table can be selected.
My current approach is to make assignees a column on Matter containing an array of the user emails selected on the frontend. The frontend developer thinks I should make it an array of user IDs instead, but I think that won't be efficient: when getting or updating matters, one would need to run a query each time to get the associated assignees from the array of IDs stored in the assignees column (and I am not entirely sure how to go about that).
Another option is a UserMatters join table, but I don't think it will be performance-friendly to populate two tables (Matter and UserMatters) on creation of a matter, and updating and getting all matters will involve writing a lot of code.
My question is: is there a better way to go about this, or should I just stick with populating the assignees field with user emails, since that looks like the better approach as far as I can see?
N.B.: I am using Sequelize (Postgres).
So what I did was this: instead of creating a through/join table, the frontend sent an array of integers containing the IDs of the assignees. Since the assignees are also users of the app, when I get a particular matter I just loop through the IDs in the assignees column (an array), fetch the user details from the User table, reassign the result to that column, and return the resource.
try {
  // Get the resource
  const resource = await getResource(id);
  if (resource) {
    // The column holds an array of IDs; convert them to numbers (in case they are strings)
    const ids = resource.assignees.map(id => Number(id));
    // Fetch all users with the corresponding IDs and wait for every request to complete
    let newAssignees = await Promise.all(ids.map(id => getUserById(id)));
    // Keep only plain objects (as per Sequelize)
    newAssignees = newAssignees.map(el => el.get({ raw: true }));
    // Reassign the result
    resource.assignees = newAssignees;
    return res.status(200).json({
      status: 200,
      resource
    });
  } else {
    return res.status(404).json({
      status: 404,
      message: 'Resource not found'
    });
  }
} catch (error) {
  return res.status(500).json({
    status: 500,
    err: error.message
  });
}
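As an aside, the N parallel getUserById calls can be collapsed into a single query: Sequelize renders an array value in a where clause as SQL IN. A sketch of just the options object (my illustration, not the author's code; raw: true returns plain objects, so the el.get() step becomes unnecessary):

```javascript
// Options for a single User.findAll that fetches every assignee at once;
// Sequelize turns the array value into WHERE "id" IN (...).
function assigneeQueryOptions(assignees) {
  return {
    where: { id: assignees.map(Number) }, // coerce possible string IDs
    raw: true, // plain objects instead of model instances
  };
}
// usage (hypothetical):
// const users = await User.findAll(assigneeQueryOptions(resource.assignees));
```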
I currently get to work with DynamoDB and I have a question regarding the structure I should choose.
I set up Twilio to be able to receive WhatsApp messages from guests in a restaurant. Guests can send their feedback directly to my Twilio WhatsApp number. I receive that feedback via webhook and save it in DynamoDB. The restaurant manager gets a dashboard (a React application) where he can monitor the feedback. While I start with one restaurant / one WhatsApp number, I will add more users/restaurants over time.
Now I have one of the following two structures in mind. With the first idea, I would always create a new item when a new message from a guest is sent to the restaurant.
With the second idea, I would (most of the time) update an existing entry; a new item is created only if the receiver/restaurant doesn't exist yet. Every subsequent message to that restaurant just updates the existing item.
Do you have any advice on what's the best way forward?
First idea:
PK (primary key), Created (epoch time), Receiver/Restaurant (phone number), Sender/Guest (phone number), Body (string)
Sample data:
1, 1574290885, 4917123525993, 4916034325342, "Example Message 1" # Restaurant McDonalds (4917123525993)
2, 1574291036, 4917123525993, 4917542358273, "Example Message 2" # different sender (4917542358273)
3, 1574291044, 4917123525993, 4916034325342, "Example Message 3" # same sender as pk 1 (4916034325342)
4, 1574291044, 4913423525123, 4916034325342, "Example Message 4" # Restaurant Burger King (4913423525123)
Second idea:
{
Receiver (primary key),
Messages: {
{
id,
Created,
From,
Body
}
}
}
Sample data (same data as for first idea, but different structured):
{
Receiver: 4917123525993,
Messages: {
{
Created: 1574290885,
Sender: 4916034325342,
Body: "Example Message 1"
},
{
Created: 1574291036,
Sender: 4917542358273,
Body: "Example Message 2"
},
{
Created: 1574291044,
Sender: 4916034325342,
Body: "Example Message 3"
}
}
}
{
Receiver: 4913423525123,
Messages: {
{
Created: 1574291044,
Sender: 4916034325342,
Body: "Example Message 4"
}
}
}
If I read this correctly, the second approach proposes to save all messages received by a restaurant as a nested list (the Messages property looks like an object in the samples you've shared, but I assume it is an array, since that would make more sense).
One potential problem that I foresee with this is that DynamoDB items have a size limit (400 KB). Agreed, this seems like a pretty large number, but you're bound to reach that limit pretty quickly if you use this application for something like a food-order delivery system.
Another potential issue is that querying on nested objects is not possible in DynamoDB and the proposed structure would mostly involve table scans for any filtering, greatly increasing operational costs.
Unlike with relational DBs, the structure of your data in document DBs is dependent heavily on the questions you want to answer most frequently. In fact, you should avoid designing your NoSQL schema unless you know what questions you want to answer, your access patterns, and your data volumes.
To come up with a data model, I will assume you want to answer the following questions with your table :
Get all messages received by a restaurant, ordered by timestamp (ascending/descending can be chosen in the query by specifying ScanIndexForward = true/false)
Get all messages sent by a user ordered by timestamp
Get all messages sent by a user to a restaurant, ordered by timestamp
Consider the following record structure :
{
pk : <restaurant id>, // Partition key of the main table
sk : "<user id>:<timestamp>", // Synthetic (generated) range key of the main table
messageBody : <message content>,
timestamp: <timestamp> // Local secondary index (LSI) on this field
}
You insert a new record of this structure for each new message that comes into your system. This structure allows you to :
Efficiently query all messages received by a restaurant ID using only the partition key
Efficiently retrieve all messages received by a restaurant and sent by a user using pk = <restaurant id> and begins_with(sk, <user id>)
The LSI on timestamp allows for efficiently filtering messages based on creation time.
However, this by itself does not allow you to query all messages sent by a user (to any restaurant, or a specific restaurant). To do that we can create a global secondary index (GSI), using the table's sk property (containing user IDs) as the GSI's primary key, and a synthetic range key that consists of the restaurant ID and timestamp separated by a ':'.
GSI structure
{
gsi_pk: <user Id>,
gsi_sk: "<restaurant Id>:<timestamp>",
messageBody : <message content>
}
messageBody is a non key field projected on to the GSI
The synthetic SK of the GSI helps make use of the different key matching modes that DynamoDB provides (less than, greater than, starts with, between).
This GSI allows us to answer the following questions:
Get all messages by a user (using only gsi_pk)
Get all messages by a user, sent to a particular restaurant, ordered by timestamp (gsi_pk = <user Id> and begins_with(gsi_sk, <restaurant Id>))
The system has some duplication of data, but that is in line with one of the core ideas of DynamoDB, and of most NoSQL databases. I hope this helps!
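Putting the two key schemas together, each incoming message becomes one item carrying both the main-table keys and the GSI keys. A sketch (the zero-padding is my addition, to keep lexicographic sort-key order consistent with chronological order):

```javascript
// Build the DynamoDB item for one incoming message, with the synthetic
// sort keys described above. Timestamps are zero-padded so that string
// comparison of the sort keys matches chronological order.
function buildMessageItem(restaurantId, userId, timestamp, body) {
  const ts = String(timestamp).padStart(13, '0');
  return {
    pk: restaurantId,                   // main table partition key
    sk: `${userId}:${ts}`,              // main table synthetic range key
    gsi_pk: userId,                     // GSI partition key
    gsi_sk: `${restaurantId}:${ts}`,    // GSI synthetic range key
    timestamp,                          // LSI field for time-based filtering
    messageBody: body,
  };
}
```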
Storing multiple messages in a single record has multiple issues:
The size of each write will grow over time (which translates to money and response time; worst case, you may end up hitting the 400 KB item limit).
Race conditions between multiple writes.
No way to aggregate messages by user or support other access patterns.
And the worst part is that I don't see any benefit in storing multiple messages together. (Other than perhaps being able to query all of them at once, which itself becomes a con as the size grows: you cannot do "get me the last 10 reviews", you always have to fetch everything and then take the last 10.)
Hence, go for the option where the messages are stored as separate items.
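Under the one-item-per-message design, "the last 10 reviews for a restaurant" is a single Query. A sketch of the parameters (AWS SDK DocumentClient shape; table and key names assumed from the previous answer):

```javascript
// Query: the newest 10 messages for one restaurant, assuming the table's
// sort key is ordered by timestamp as described above.
const params = {
  TableName: "Messages",                      // assumed table name
  KeyConditionExpression: "pk = :restaurant",
  ExpressionAttributeValues: { ":restaurant": "4917123525993" },
  ScanIndexForward: false, // descending sort-key order, i.e. newest first
  Limit: 10,
};
// usage (hypothetical): const page = await docClient.query(params).promise();
```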
In order to include information (like an order number) in a survey using an Email Collector, it's my understanding that this information needs to be stored in the Contact's custom variables. My concern is what happens if I am sending something like a customer satisfaction survey that needs to reference the order number, and the same customer (email address) places more than one order, and I have to send out more than one survey.
Will the custom values that are returned with the collectors/.../responses API call include the custom values at the time of the survey invite? Or will these be set to current values?
The custom values are stored on the response at the time the survey is taken, so if they change later, they will not change on the response. This will work fine as long as you don't send out another survey with new custom values to the same contact before they respond to the previous one.
Just an FYI, there is also an option to set extra_fields on a recipient when adding recipients to an email collector (rather than on the contact).
POST /v3/collectors/<collector_id>/messages/<message_id>/recipients
{
"email": "test@example.com",
"extra_fields": {
"field1": "value1",
"field2": "value2"
}
}
I don't believe that data is stored with the response, but the recipient_id is, and you can fetch the recipient by ID to get that data back.
Those are two options, you can see which one works best for you. The benefit of contact custom values is that you can view them and edit them from the web, whereas extra_fields are API only fields.
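For completeness, a small helper that assembles that request (the endpoint path is the one shown above; the IDs and helper name are placeholders of mine):

```javascript
// Build the request for adding a recipient with extra_fields to an
// email collector message.
function recipientRequest(collectorId, messageId, email, extraFields) {
  return {
    method: "POST",
    path: `/v3/collectors/${collectorId}/messages/${messageId}/recipients`,
    body: JSON.stringify({ email: email, extra_fields: extraFields }),
  };
}
```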
Hopefully I can describe this correctly, but I come from the RDBMS world and I'm building an inventory-type application with Meteor. Meteor and MongoDB may not be the best option for this application, but hopefully it can be done, and this seems like a circumstance many converts will run into.
I'm trying to forget many of the things I know about relational databases and referential integrity so I can get my head wrapped around Mongodb but I'm hung up on this issue and how I would appropriately find the data with Meteor.
The inventory application will have a number of drop downs but I'll use an example to better explain. Let's say I wanted to track an item so I'll want the Name, Qty on Hand, Manufacturer, and Location. Much more than that but I'm keeping it simple.
The Name and Qty on Hand are easy since they are entered by the user, but the Manufacturer and the Location should be chosen from a drop-down backed by a data-driven list (I'm assuming a collection of sorts, with a new entry added to the list if it is a new Manufacturer or Location). Odds are I will use the Autocomplete package as well, but the point is the same. I certainly wouldn't want the end user to misspell the Manufacturer name and thereby end up with documents that are supposed to share a Manufacturer but don't, due to a typo. So I need some way to enforce the integrity of the data stored for Manufacturer and Location.
The reason is because when the user is viewing all inventory items later, they will have the option of filtering the data. They might want to filter the inventory items by Manufacturer. Or by Location. Or by both.
In my relational way of thinking this would just be three tables. INVENTORY, MANUFACTURER, and LOCATION. In the INVENTORY table I would store the ID of the related respective table row.
I'm trying to figure out how to store this data with Mongodb and, equally important, how to then find these Manufacturer and Location items to populate the drop down in the first place.
I found the following article which helps me understand some things but not quite what I need to connect the dots in my head.
Thanks!
referential data
[EDIT]
Still working at this, of course, but the best I've come up with is to do it the normalized way, much like the article above suggests. Something like this:
inventory
{
name: "Pen",
manufacturer: {id: "25643"},
location: {id: "95789"}
}
manufacturer
{
name: "BIC",
id: "25643"
}
location
{
name: "East Warehouse",
id: "95789"
}
Seems like this (in a simpler form) must be an extremely common need for many/most applications, so I want to make sure I'm approaching it correctly. Even if this example code were correct, should I use an id field with generated numbers like that, or should I just use the built-in _id field?
I've come from a similar background, so I don't know if I'm doing it correctly in my application, but I have gone for an option similar to yours. My app is an e-learning app, so an Organisation will have many Courses.
So my schema looks similar to yours, except I obviously have an array of objects that look like {course_id: <id>}.
I then registered a helper that takes the data from the organisation and adds in the additional data I need about the courses.
// Gets Organisation Courses - In your case could get the locations/manufacturers
UI.registerHelper('organisationCourses', function() {
  var user = Meteor.user();
  if (user) {
    var organisation = Organisations.findOne({_id: user.profile.organisation._id});
    var courses = organisation.courses.courses;
    return courses;
  } else {
    return false;
  }
});
// This takes the coursedata and for each course_id value finds and adds all the course data to the object
UI.registerHelper('courseData', function() {
var courseContent = this;
var course = Courses.findOne({'_id': courseContent.course_id});
return _.extend(courseContent, _.omit(course, '_id'));
});
Then from my page all I have to call is:
{{#each organisationCourses}}
{{#with courseData}}
{{> admListCoursesItem}}
{{/with}}
{{/each}}
If I remember rightly I picked up this approach from an EventedMind How-to video.
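Stripped of the Meteor specifics, this helper pattern is just a client-side join. A plain-JavaScript sketch of the same idea, using the sample documents from the question (the function name is my own):

```javascript
// Decorate each inventory item with the names of its referenced
// manufacturer and location (in Meteor the lookups would be findOne calls
// against the Manufacturers and Locations collections).
function joinInventory(items, manufacturers, locations) {
  const byId = coll => Object.fromEntries(coll.map(d => [d.id, d]));
  const m = byId(manufacturers);
  const l = byId(locations);
  return items.map(i => ({
    ...i,
    manufacturerName: (m[i.manufacturer.id] || {}).name,
    locationName: (l[i.location.id] || {}).name,
  }));
}
```

Filtering by Manufacturer or Location then stays a cheap query on the stored id, with the display name resolved only at render time.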
I have a Book model that has the property upVotes. Book instances can be queried from the database (MongoDB), modified, and then saved. If a user upvotes a book, I update the upVotes count, and save the whole model back to the server.
The problem is that if someone else votes between the time the instance is loaded, and the time the instance is saved, then the two votes will be saved as just one vote. What I need is an easy way to say "increment the model by 1 server-side", instead of "increment the model by 1 client-side and hope there will be no conflict".
You don't have to save the whole model to the server just to change one thing, you can (and should in this case) add an upVote method to your model that does an "increment upvotes" AJAX call to your server. In your model you'd have something like this:
upVote: function() {
var self = this;
$.ajax({
url: '/some/upvote/path',
type: 'POST',
success: function(data) {
self.set('upVotes', data.upVotes);
},
// ...
});
}
And then the view would have this to handle the upvote action:
upVote: function() {
// Highlight the upvote button or provide some other feedback that
// the upvote has been seen.
this.model.upVote();
}
and you'd probably have a listener for change events on the model's upVotes property to properly increment the displayed upvote counter (if you have such a thing).
Furthermore, your /some/upvote/path on the server would just send an $inc update into MongoDB to avoid the same "two things happening at once" problem on your server. If you were using a relational database, you'd want to end up doing something like update t set upvotes = upvotes + 1 where id = ?.
There is no need for a "query, update, save" round trip on either the client or the server for a simple increment operation. Instead, treat the increment as a single increment operation and push that increment all the way down to your final persistent data storage layer.
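To make the race concrete, here is the lost-update problem simulated in a few lines, followed by the shape of the atomic fix (the field name is from the question; the update document is what the server would pass to MongoDB):

```javascript
// Two clients each load the model, increment locally, and save the
// whole model back -- the classic lost update.
let stored = { upVotes: 0 };
const clientA = { ...stored }; // A loads the model
const clientB = { ...stored }; // B loads it before A saves
clientA.upVotes += 1;
stored = { ...clientA };       // A saves: upVotes is now 1
clientB.upVotes += 1;
stored = { ...clientB };       // B saves: upVotes is still 1 -- A's vote is lost

// The fix: send only the increment and let the database apply it atomically.
const upvoteUpdate = { $inc: { upVotes: 1 } };
// usage (node driver, hypothetical collection name):
// db.collection('books').updateOne({ _id: bookId }, upvoteUpdate)
```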