Is there a way to inherit properties from parent resources in NestJS? - rest

I'm in the process of making an API for a database with three distinct categories: Systems, Products, Components:
database schema skeleton
Now, each of the above categories will have several tables. For example, there are many different types of components, such as cameras, monitors, power supplies, etc., and each one will have its own table in the database.
I'm using NestJS to build an API for this database. One way I could approach this is to create a resource for every single table in the database. But the associated CRUD operations for each table within a given category have identical functionality. For example, there is no difference between the Cameras resource and the Monitors resource other than the fact that they reside in different tables in the database.
For each resource within the same category, I would just be copying and pasting the same code over and over again, and I am curious if there is a way to avoid this duplication.
So, I'm wondering if there's a way for me to write my controller, service, and module files for a parent resource called Components, and just inherit this exact functionality within the controller, service, and module files for all the child resources such as Cameras and Monitors?
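In case it helps to picture it: a minimal sketch of the inheritance approach I'm imagining (Repo, Camera, and CamerasService are illustrative names, not Nest APIs; route decorators on a base class are inherited by subclasses, but verify this against your Nest version):

```typescript
import { Controller, Get, Param } from '@nestjs/common';

// Placeholder persistence interface, so the sketch stays ORM-agnostic.
interface Repo<T> {
  find(): Promise<T[]>;
  findById(id: number): Promise<T | null>;
}

// Generic service: the CRUD logic is written once.
export abstract class ComponentsService<T> {
  constructor(protected readonly repo: Repo<T>) {}

  findAll(): Promise<T[]> {
    return this.repo.find();
  }

  findOne(id: number): Promise<T | null> {
    return this.repo.findById(id);
  }
}

// Generic controller: route handlers are inherited by child controllers.
export abstract class ComponentsController<T> {
  constructor(protected readonly service: ComponentsService<T>) {}

  @Get()
  findAll(): Promise<T[]> {
    return this.service.findAll();
  }

  @Get(':id')
  findOne(@Param('id') id: string): Promise<T | null> {
    return this.service.findOne(Number(id));
  }
}

// A concrete resource then only declares its route and entity type.
// (@Injectable/provider wiring omitted for brevity.)
interface Camera {
  id: number;
  model: string;
}

export class CamerasService extends ComponentsService<Camera> {}

@Controller('cameras')
export class CamerasController extends ComponentsController<Camera> {
  constructor(service: CamerasService) {
    super(service);
  }
}
```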

Related

Can I just use the packages in my package diagram as the entities for my class diagram?

We need to create a booking system that allows rape victims to book sessions with a counsellor (who is a volunteer and therefore not on duty 24/7) online. The organisation used to do the booking process over the phone, writing down important information.
This is the package diagram I created for a project. I am not sure: am I allowed to just use the packages as entities for the class diagram?
A package is a tool for structuring models by grouping related pieces into namespaces.
It is not unusual for a decomposition to coincide with larger components (e.g. Client, Application and Data). But it is not correct to use packages as a substitute for classes; it may even look confusing.
It is not a problem to keep enclosing or nested packages such as Booking system in a class diagram. But you should use a proper class box for classes. You would then be able to show not only the properties but also the operations in a different compartment. Last but not least, you could be more precise about the relationships between classes: packages are only related via dependencies and some special package operations, whereas classes can also be related with associations, inheritance, etc.
For example, your diagram says only that Booking depends on Client, meaning the content of one package needs to know about the other package. But in reality Client and Booking should be associated, i.e. an instance of Client would be related for a longer time to some specific instances of Booking. In this case, you'd expect to be able to navigate easily from one to the other. Associations also let you specify multiplicity, e.g. that one client could have 1 or more bookings, but each booking would be for only one client.
Other remarks, unrelated to the question:
Your comment box suggests that you are trying to explain the purpose of the system, perhaps for some stakeholders. You may therefore consider using a use-case diagram to show the big picture, with the different actors and the goals they want to achieve with the system.
In a class box, you could add an «Entity» stereotype above the name of the class. Entities are domain classes that matter to the users.
Data storage system does not seem to fit in the diagram: it's not really an entity. Perhaps it's a class, a component or a package, but not an entity.

merge - upsert/delete in google cloud datastore

I am working on a POC (to move part of our functionality from a relational DB to Cloud Datastore). I have a few questions:
I would need to refresh a few kinds every night, as the data comes in from a different data source (via flat files). I read about it and understood that there is no TRUNCATE-like functionality in Datastore. I believe the only option is to retrieve the keys of the kind in a loop, delete entity by entity, and then use the import functionality to load the new set of data. Is there any better option?
Assume I have a kind called department and a kind called store, and I now need a kind called dept-store whose parent nodes would be department and store. Is there a way to enforce this kind of relationship? From the documentation I see that there can only be one parent.
If I have a child entity in kind1 whose parent is in kind2, and they are linked together, is there a way to query all the properties of kind1 and kind2 together? From a relational DB perspective, it is like an equi-join with SELECT *. I am looking for equivalent functionality in Datastore.
In order to answer your questions:
There are two ways to delete multiple entities. First, you can use Cloud Dataflow to delete entities in bulk [1]. Second, once the keys are retrieved, you can make a batch delete operation by passing the keys to the Datastore delete function; there is a usage example here [2]. To retrieve the keys, you can run a keys-only query [3].
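As a rough illustration of the second approach with the Node.js client (@google-cloud/datastore), here is a sketch; the kind name is a placeholder, and the per-commit mutation limit (500 at the time of writing) should be checked against current quotas:

```typescript
import { Datastore } from '@google-cloud/datastore';

const datastore = new Datastore();

// Delete every entity of a kind: keys-only query, then batched deletes.
async function truncateKind(kind: string): Promise<void> {
  const query = datastore.createQuery(kind).select('__key__');
  const [entities] = await datastore.runQuery(query);
  const keys = entities.map((e) => e[datastore.KEY]);

  // Commits are limited to 500 mutations, so delete in chunks.
  for (let i = 0; i < keys.length; i += 500) {
    await datastore.delete(keys.slice(i, i + 500));
  }
}
```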
In Datastore an entity can have only one parent but multiple children. For your use case, though, you may try a third kind, dept-store, whose properties hold the keys of the related entities from the department and store kinds. This solution needs a good understanding of your requirements, as Datastore is by nature a non-relational database.
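A sketch of that modeling idea, reusing the client from above (the key names and the departmentKey/storeKey property names are just examples):

```typescript
// A dept-store entity that references its two "parents" via key properties.
async function linkDeptStore(): Promise<void> {
  await datastore.save({
    key: datastore.key(['dept-store']),
    data: {
      departmentKey: datastore.key(['department', 'dept-42']),
      storeKey: datastore.key(['store', 'store-7']),
    },
  });
}
```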
You can look up multiple entities by passing the keys retrieved from kind1 and kind2 to a batch get operation [2].
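Continuing the sketch, a "join-like" read would get the linking entity and then batch-get the related entities by the keys stored on it; the merge has to happen client-side, since Datastore has no server-side join:

```typescript
async function loadDeptStore(deptStoreKey: any /* a Datastore Key */) {
  const [link] = await datastore.get(deptStoreKey);
  const [related] = await datastore.get([link.departmentKey, link.storeKey]);
  // Merge the properties of all three entities client-side.
  return Object.assign({}, ...related, link);
}
```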

Restrict access to database resources in Entity Framework + UoW + Generic Repositories

I'm using ASP.NET MVC3 with Entity Framework 4.
I am using the Unit Of Work + Generic Repository pattern.
I searched for similar questions everywhere; I see that many people have my problem, but I still can't find a good, practical solution.
We have a multi-tenant database.
Imagine a database with a similar structure:
customers
groups, associated to a customer
users, associated to one or many groups
And then, for each customer we have
resources, associated to one or many groups, and linked to each other with foreign keys, many-to-many relationships and so on
So, when a user logs in, he is associated to one or many groups, and he needs access to the parent and child resources associated with those groups.
Now the problem is:
I implemented a sort of pre-filtering with a .Where() clause in the unit of work, in the repositories, based on the ID of the logged-in user.
And this is working.
The pre-filtering I did on the repositories is working fine, but of course it works only if you directly access the repository of the resources of TYPE A, TYPE B, TYPE C and so on.
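The prefilter is essentially this idea (a language-agnostic sketch, written in TypeScript only for illustration; the real code is EF/C#, and all names here are placeholders):

```typescript
// Generic repository that applies a group filter to every query.
interface HasGroups {
  groupIds: number[];
}

class FilteredRepository<T extends HasGroups> {
  constructor(
    private all: T[], // stands in for the underlying DbSet
    private userGroupIds: Set<number> // groups of the logged-in user
  ) {}

  // The equivalent of the .Where() prefilter in the unit of work.
  query(): T[] {
    return this.all.filter((e) =>
      e.groupIds.some((g) => this.userGroupIds.has(g))
    );
  }
}
```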
But a resource is linked to other resources with many-to-many tables and foreign keys.
So, it happens that sometimes a resource belongs to a group to which the user has access, but sometimes the resources linked to this resource belong to a group to which the user does not have access.
If I traverse the navigation properties of the "parent" resource, the user can access all the linked resources, even the one belonging to other groups.
So, if you are starting from a TYPE A resource, and traverse the navigation properties to reach the TYPE B and TYPE C resources, they are not filtered.
If you access the TYPE B and TYPE C repositories, they are filtered.
Now my filters, as I said before, are in the Unit Of Work class. I tried to move them into a custom DbContext, applying the filters directly to the DbSet, but nothing changes:
It seems that EF accesses the database directly to build the navigation properties, without going through the other repositories or DbSets, thus bypassing the prefilter.
What can we do?
I see that NHibernate has Global Filters that could accomplish my task, so I'm evaluating a migration from EF to NH.
I see that many other people are asking for .Include() filters, thus disabling lazy loading.
I can provide some code if needed, but I hope I explained my problem correctly.
Thank you in advance.
Best Regards,
Marco
I saw a solution with mapping to views and stored procedures, but I'm not sure how hard it would be in development and maintenance. In short, it is possible to map the EF model to views where the data is already filtered; in this solution each user has their own database credentials.

On observing an execution tree of interdependent models in MVC

I've developed on the Yii Framework for a while now (4 months), and so far I have encountered some issues with MVC that I want to share with experienced developers out there. I'll present these issues by listing their levels of complexity.
[Level 1] CR (create/update) form. First off, we have a lot of forms. Each form itself is a model, so each has some validation rules, some attributes, and some operations to perform on the attributes. In a lot of cases, each of these forms does both updating and creating records in the db using a single active record object.
-> So at this level of complexity, a form has to
when opened,
be able to display the db-friendly data from the db in a human-friendly way
be able to display all the form fields with the attributes of the active record object. Adding, removing, or altering columns in the db table has to affect the display of the form.
when saving, be able to format the human-friendly data into db-friendly data before storing it
when validating, be able to perform the basic validations enforced by the active record object, as well as other validations to fulfill some business rules.
when validation fails, be able to roll back changes made to the attributes as well as changes made to the db, and present the user with their originally entered data.
[Level 2] Extended CR form. A form that can create/update records in different tables at once. Not just that: whether a form creates/updates one of its records can depend on other conditions (more business rules), so a form can sometimes update records in tables A and B but not D, and sometimes in A and D but not B.
-> So at this level of complexity, we see a form has to:
be able to satisfy [Level 1]
be able to conditionally create/update certain records, and conditionally create/update certain columns of certain records.
[Level 3] The Tree of Models. The role of a form in an application is, in many ways, a port that lets users interact with your application. To satisfy requests, this port will interact with many other objects which, in turn, interact with many more objects. Some of these objects can be seen as models. An Active Record is a model, but a Mailer can also be a model, and so can a RobotArm. These models use one another to satisfy a user's request. Each model can perform its own operation, and the whole tree has to be able to roll back any changes made in case of error/failure.
Has anyone out there come across or been able to solve these problems?
I've come up with ideas like encapsulating model attributes in ModelAttribute objects to handle their existence throughout the client, server, and db tiers.
I've also thought we could give the tree of models an Observer that observes and notifies the observed models to roll back changes when errors occur. But what if multiple observers can exist? What if a node uses its parent's observer but gives its children other observers?
Engineers, developers, Rails, Yii, Zend, ASP, JavaEE, any MVC guys, please join this discussion for the sake of science.
--Update to teresko's response:---
#teresko I actually intended to incorporate the services into the execution inside a unit of work and have the unit of work not worry about new/updated/deleted. Each object inside the unit of work will be responsible for its state and be required to implement its own commit() and rollback(). Once an error occurs, the unit of work will roll back all changes from the newest registered object to the oldest, since we're not only dealing with a database; we can also have mailers, publishers, etc. If, on the other hand, the tree executes successfully, we call commit() from the oldest registered object to the newest. This way the mailer can save the mail and send it on commit.
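Roughly, what I have in mind (a minimal sketch; the interface and class names are placeholders):

```typescript
// Each registered object manages its own state transitions.
interface Transactional {
  commit(): void;
  rollback(): void;
}

class UnitOfWork {
  private registered: Transactional[] = [];

  register(obj: Transactional): void {
    this.registered.push(obj);
  }

  // On success: commit oldest-to-newest, so e.g. the mailer saves
  // and sends its mail last.
  commitAll(): void {
    for (const obj of this.registered) obj.commit();
  }

  // On error: roll back newest-to-oldest.
  rollbackAll(): void {
    for (const obj of [...this.registered].reverse()) obj.rollback();
  }
}
```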
Using a data mapper is a great idea, but we still have to make sure the columns in the db match the data mapper and the domain object. Moreover, an extended CR form, or a model whose attributes depend on other models, has to match their attributes in terms of validation and datatype. So maybe an attribute can be an object, shipped from model to model? An attribute could also tell whether it has been modified, what validation should be performed on it, and how it can be made human-friendly, application-friendly, and db-friendly. Any update to the db schema would affect this attribute, thereby throwing exceptions that require developers to change the system to accommodate the schema change.
The cause
The root of your problem is misuse of the active record pattern. AR is meant for simple domain entities with only basic CRUD operations. When you start adding large amounts of validation logic and relations between multiple tables, the pattern starts to break apart.
Active record, at its best, is a minor SRP violation, for the sake of simplicity. When you start piling on responsibilities, you start to incur severe penalties.
Solution(s)
Level 1:
The best option is to separate the business and storage logic. Most often this is done by using domain objects and data mappers:
Domain objects (in other materials also known as business objects or domain model objects) deal with validation and specific business rules and are completely unaware of how (or even "if") the data in them is stored and retrieved. They also let you have objects that are not directly bound to storage structures (like DB tables).
For example: you might have a LiveReport domain object, which represents current sales data. But it might have no specific table in the DB. Instead it can be serviced by several mappers that pool data from Memcache, an SQL database and some external SOAP service. And the LiveReport instance's logic is completely unrelated to storage.
Data mappers know where to put the information from domain objects, but they do not perform any validation or data integrity checks. Though they can handle exceptions that come from low-level storage abstractions, like a violation of a UNIQUE constraint.
Data mappers can also perform transactions, but, if a single transaction needs to span multiple domain objects, you should look at adding a Unit of Work (more about it below).
In more advanced/complicated cases data mappers can interact with and utilize DAOs and query builders. But this is more for situations where you aim to create ORM-like functionality.
Each domain object can have multiple mappers, but each mapper should work only with a specific class of domain objects (or a subclass, if your code adheres to LSP). You should also recognize that a domain object and a collection of domain objects are two separate things and should have separate mappers.
Also, each domain object can contain other domain objects, just like each data mapper can contain other mappers. But in the case of mappers it is much more a matter of preference (I dislike it vehemently).
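To make the separation concrete, here is a minimal sketch built on the LiveReport example from above (the method names are assumptions, not a prescribed API):

```typescript
// Domain object: business rules only, no storage knowledge.
class LiveReport {
  public sales: number[] = [];

  total(): number {
    return this.sales.reduce((a, b) => a + b, 0);
  }
}

// Mapper: knows where the data lives; the domain object does not.
class LiveReportSqlMapper {
  fetch(report: LiveReport): void {
    // e.g. run an SQL query and populate the domain object
    report.sales = [];
  }

  store(report: LiveReport): void {
    // e.g. write report.sales back, translating storage-level
    // exceptions (such as a UNIQUE violation) as needed
  }
}
```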
Another improvement that could alleviate your current mess would be to prevent application logic from leaking into the presentation layer (most often the controller). Instead you would benefit greatly from using services that contain the interaction between mappers and domain objects, thus creating a public-ish API for your model layer.
Basically, with services you encapsulate complete segments of your model that can (in the real world, with minor effort and adjustments) be reused in different applications. For example: Recognition, Mailer or DocumentLibrary would all be services.
Also, I should note that not all services have to contain domain objects and mappers. A quite good example would be the previously mentioned Mailer, which could be used either directly by a controller or (more likely) by another service.
Level 2:
If you stop using the active record pattern, this becomes quite a simple problem: you need to make sure that you save only data from those domain objects which have actually changed since the last save.
As I see it, there are two ways to approach this:
Quick'n'Dirty
If something changed, just update it all ...
The way that I prefer is to introduce a checksum variable in the domain object, which holds a hash of all the domain object's variables (with the exception, of course, of the checksum itself).
Each time the mapper is asked to save a domain object, it calls a method isDirty() on the domain object, which checks whether the data has changed. The mapper can then act accordingly. This, with some adjustments, can also be used for object graphs (if they are not too extensive, in which case you might need to refactor anyway).
Also, if your domain object actually gets mapped to several tables (or even different forms of storage), it might be reasonable to have several checksums, one for each set of variables. Since mappers are already written for specific classes of domain objects, this would not strengthen the existing coupling.
For PHP you will find some code examples in this answer.
Note: if your implementation is using DAOs to isolate domain objects from data mappers, then the logic of checksum-based verification would be moved to the DAO.
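A minimal sketch of the checksum approach (TypeScript here, whereas the linked examples are PHP; hashing via JSON.stringify is a simplification):

```typescript
class DomainObject {
  private checksum = '';
  public title = '';
  public body = '';

  // Hash every field except the checksum itself.
  private computeChecksum(): string {
    return JSON.stringify([this.title, this.body]);
  }

  markClean(): void {
    this.checksum = this.computeChecksum();
  }

  isDirty(): boolean {
    return this.checksum !== this.computeChecksum();
  }
}

// Mapper side: skip the write when nothing has changed.
function save(obj: DomainObject): void {
  if (!obj.isDirty()) return;
  // ... persist the object, then:
  obj.markClean();
}
```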
Unit of Work
This is the "industry standard" for your problem and there is a whole chapter (11th) dealing with it in PoEAA book.
The basic idea is this: you create an instance that acts like a controller (in the classical, not the MVC, sense of the word) between your domain objects and data mappers.
Each time you alter or remove a domain object, you inform the Unit of Work about it. Each time you load data into a domain object, you ask the Unit of Work to perform that task.
There are two ways to tell the Unit of Work about the changes:
caller registration: the object that performs the change also informs the Unit of Work
object registration: the changed object (usually from a setter) informs the Unit of Work that it was altered
When all the interaction with the domain objects has been completed, you call the commit() method on the Unit of Work. It then finds the necessary mappers and stores all the altered domain objects.
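A rough sketch of object tracking plus commit (the mapper lookup is simplified to a map keyed by constructor; a real implementation would also track new objects and wrap the whole thing in a transaction):

```typescript
interface DataMapper {
  store(obj: object): void;
  remove(obj: object): void;
}

class UnitOfWork {
  private dirty = new Set<object>();
  private removed = new Set<object>();

  constructor(private mappers: Map<Function, DataMapper>) {}

  registerDirty(obj: object): void {
    this.dirty.add(obj);
  }

  registerRemoved(obj: object): void {
    this.dirty.delete(obj);
    this.removed.add(obj);
  }

  commit(): void {
    // Find the mapper for each altered object and store/remove it.
    for (const obj of this.dirty) this.mappers.get(obj.constructor)!.store(obj);
    for (const obj of this.removed) this.mappers.get(obj.constructor)!.remove(obj);
  }
}
```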
Level 3:
At this stage of complexity, the only viable implementation is to use a Unit of Work. It would also be responsible for initiating and committing the SQL transactions (if you are using an SQL database), with the appropriate rollback clauses.
P.S.
Read the "Patterns of Enterprise Application Architecture" book. It's what you desperately need. It also would correct the misconception about MVC and MVC-inspired design patters, that you have acquired by using Rails-like frameworks.

Need some advice concerning MVVM + Lightweight objects + EF

We are developing a back-office application with a quite large DB.
It's not reasonable to load everything from the DB into memory, so when a model's properties are requested we read them from the DB (via EF).
But many of our UIs are just simple lists of entities with some (!) properties presented to the user.
For example, we just want to show Id, Title and Name.
And later, when the user selects an item and wants to perform some actions, the whole object is needed. Right now we have the list of items stored in memory.
Some properties contain large texts, images or other data.
EF works with entities and reading a bunch of large objects degrades performance notably.
As far as I understand, the problem can be solved by creating lightweight entities and using them in the appropriate context.
First.
I'm afraid that each view will make us create a new lightweight entity, and we will eventually end up with a bloated object context.
Second. As the Model wraps EF we need to provide methods for various entities.
Third. ViewModels communicate and pass entities to each other.
So I'm stuck with all these considerations and need good architectural design advice.
Any ideas?
For images and large texts you may consider table splitting, which is commonly used to split a table into a lightweight entity and a "heavy" entity.
But I think what you call lightweight "entities" are data transfer objects (DTOs). These are not supplied by the context (so it won't get bloated) but by projection from entities, which is done in a repository or service.
For the projection you can use AutoMapper, especially the newer feature that I describe here. This reduces the number of methods you need to provide "for various entities" (DTOs), because the type to project to can be given as a generic type parameter.
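To illustrate the DTO idea in a language-agnostic way (TypeScript here purely for illustration; in EF the projection would live in the query itself, e.g. a Select or AutoMapper's queryable projection, so the heavy columns are never read from the DB):

```typescript
// Full entity: includes the heavy columns.
interface Product {
  id: number;
  title: string;
  name: string;
  imageData: Uint8Array; // large
  description: string; // large
}

// Lightweight DTO for list views: only the cheap columns.
interface ProductListDto {
  id: number;
  title: string;
  name: string;
}

const toListDto = (p: Product): ProductListDto => ({
  id: p.id,
  title: p.title,
  name: p.name,
});
```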