I'm using Prisma ORM with GraphQL.
I have a user type, and for obvious reasons, I don't want the password field to be queryable. Is there any way to do this, either in Prisma or GraphQL (or PostgreSQL)?
Currently you can use select to fetch only the fields that you require and omit the ones that need to stay private.
There's a feature request here for the same, which would allow excluding specific fields.
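A minimal sketch of that with Prisma Client (the User model and its fields here are assumptions):

    import { PrismaClient } from '@prisma/client';

    const prisma = new PrismaClient();

    // Only the listed fields are fetched; because `password` is not in the
    // select, the object handed to a GraphQL resolver can never expose it.
    async function findUserSafely(id: number) {
      return prisma.user.findUnique({
        where: { id },
        select: { id: true, email: true, name: true }, // password omitted
      });
    }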
I want to implement a multi-tenant solution where I have one web server and one database shared across all tenants. According to this blog post from AWS, this is the "pooled" multi-tenancy model.
I'm using Nest.js and Sequelize. If Sequelize is not a good fit for this, I could also switch to another library like TypeORM if necessary.
How can this be implemented? I'm at a loss as to how I can use a different connection (different database user) for each HTTP request, and I also don't know a good way to set a runtime context variable on the connection.
Currently, every HTTP request contains a tenant-id header. This should be used for all queries.
There is also the concept of scopes in Sequelize, but that is implemented on the client side rather than in the database itself, and it is specific to Sequelize. I would prefer a solution that is independent of Sequelize and perhaps more specific to PostgreSQL.
Is there any way to implement this with Sequelize? A hint or a basic approach would be sufficient.
It seems that this approach is similar: https://learn.microsoft.com/en-us/microsoft-365/education/deploy/design-multi-tenant-architecture
I'm studying how to create a similar architecture, but I will use the "silo" model ("physical database"). I think that first you need to create an internal database called a "catalog" that contains the user's information (does this user already have a login? If so, select that information), including credentials such as the tenant-id. Regarding Sequelize, I guess it is necessary to use raw queries to create the ROLE/GRANT/DATABASE etc., and migrations to create the same DB schema for each new client.
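For the pooled model the original question describes, one PostgreSQL-specific approach is row-level security combined with a per-request session variable. A minimal sketch, assuming a tenant_id column on each tenant-scoped table (the table, column, and setting names here are illustrative):

    import { Sequelize, Transaction } from 'sequelize';

    const sequelize = new Sequelize(process.env.DATABASE_URL!);

    // One-time DDL per tenant-scoped table (illustrative):
    //   ALTER TABLE project ENABLE ROW LEVEL SECURITY;
    //   CREATE POLICY tenant_isolation ON project
    //     USING (tenant_id::text = current_setting('app.tenant_id'));

    // Wrap each HTTP request's queries in a transaction so they share one
    // pooled connection, and set the tenant id locally on that connection.
    async function withTenant<T>(tenantId: string, work: (t: Transaction) => Promise<T>): Promise<T> {
      return sequelize.transaction(async (t) => {
        // set_config(..., true) scopes the value to this transaction only.
        await sequelize.query("SELECT set_config('app.tenant_id', :tenantId, true)", {
          replacements: { tenantId },
          transaction: t,
        });
        return work(t);
      });
    }

In Nest.js, the tenant-id header would typically be read in middleware or an interceptor that wraps the request's work in withTenant. Note that PostgreSQL bypasses policies for superusers and table owners unless FORCE ROW LEVEL SECURITY is set on the table, so the application should connect as a dedicated non-owner role.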
I have an application that connects to an existing database and retrieves some data from it. The app will use this database in read-only mode. Although it is our code, I would like to add fool-proof protection against documents being accidentally modified or deleted by other developers or myself in the future. I tried pre hooks, but there are different remove hooks (query, model, document, etc.), and I couldn't achieve consistent behavior across all types of remove operations.
Is there any appropriate solution to this task?
Create a read-only user and connect through that user:
https://sysadmins.co.za/create-read-only-users-in-mongodb/
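A minimal sketch (the database, user name, and password are placeholders):

    import mongoose from 'mongoose';

    // One-time setup in the mongo shell:
    //   use mydb
    //   db.createUser({
    //     user: 'reporting',
    //     pwd: 'secret',
    //     roles: [{ role: 'read', db: 'mydb' }]
    //   });

    // The app connects as the read-only user; every insert/update/remove is
    // then rejected by the server itself, regardless of Mongoose hooks.
    mongoose.connect('mongodb://reporting:secret@localhost:27017/mydb');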
Consider a scenario of an application where I have users and projects, and the requirement is that users can be assigned to projects. One user can be assigned to multiple projects. This is a many-to-many relationship. So what is the best way to model such a requirement?
I would like to discuss a few approaches to modelling such a requirement:
- Embedded data model
In this approach I embed the user documents inside the project document.
Advantages: you get all the required data in one API call, i.e. by fetching one single document.
Disadvantages: data duplication, which by itself is OK.
The real problem is that if you update user information, e.g. a user's mobile number or name, from the users screen, the updated information must also be reflected in all embedded copies of that user document, which requires some bulk update query to be fired.
But is this the right way?
- Embedding object references instead of objects (which is normalised)
In this case, if we embed user IDs instead of user objects, the problem mentioned above goes away, but we then have to make multiple network calls to get the required data, or maintain a separate relation kind of document as we do in SQL.
Is this the best way?
We have the same scenario, so I embed the ObjectId, and to fill in the data for clients, I populate the user data in the find function.
Contract.find({}).populate('user').then(function (contracts) { /* each contract.user is now the full user document */ });
There are few hard and fast rules, but usually with many-to-many relationships you would prefer references over embedding. This doesn't mean your data is totally flat/normalized.
For example, you could have a user document with an array of project ids. You could have the reverse for projects.
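A minimal sketch of that shape (model and field names are assumptions):

    import mongoose, { Schema } from 'mongoose';

    const userSchema = new Schema({
      name: String,
      projects: [{ type: Schema.Types.ObjectId, ref: 'Project' }], // references, not copies
    });

    const projectSchema = new Schema({
      title: String,
      members: [{ type: Schema.Types.ObjectId, ref: 'User' }],
    });

    const User = mongoose.model('User', userSchema);
    const Project = mongoose.model('Project', projectSchema);

    // populate() resolves the references when full documents are needed:
    async function projectWithMembers(id: string) {
      return Project.findById(id).populate('members');
    }

Because user data lives only in the User collection, updating a user's name or mobile number needs no bulk rewrite of project documents.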
Think about your queries and how you will structure them. That can give you other hints about how to structure your documents.
In a JHipster-based project, we need to selectively filter out certain columns based on the role of the user logged in. All users will be able to view/modify most of the columns, but only some privileged users will be able to view/modify certain secure fields/columns.
It looks like the only option to get this done is using EntityListeners. I can use an EntityListener and mask a certain column during the PostLoad event. Say, for example, I mask the column my_secure_column with XXX and display it to the user.
The user then changes some other fields/columns (that he has access to) and submits the form. Do I have to trap the partially filled-in entity again in the PreUpdate event, fetch the original value of my_secure_column from the database, and set it before persisting?
All this seems inefficient. I searched for several hours but couldn't find an implementation that suits this use case.
Edit 1: This looks like a first step to achieving this in a slightly better way. Updating Entities with Update Query in Spring Data JPA
I could use specific partial updates like updateAsUserRole, updateAsManagerRole, etc., instead of persisting the whole entity all the time.
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;
import org.springframework.stereotype.Repository;

@Repository
public interface CompanyRepository extends JpaRepository<Company, Integer> {
    @Modifying(clearAutomatically = true)
    @Query("UPDATE Company c SET c.address = :address WHERE c.id = :companyId")
    int updateAddress(@Param("companyId") int companyId, @Param("address") String address);
}
Column-based security is not an easy problem to solve, especially in combination with JPA.
Ideally you would like to avoid even loading the restricted columns, but since you are selecting entities this is not possible by default, so you have to remove the restricted content by overriding the value after load.
As an alternative, you can create a view bean (POJO) and then use a JPQL constructor expression. Personally, I would use CriteriaBuilder.construct() instead of concatenating a JPQL query, but it is the same principle.
With regard to updating the data, the UI should of course not allow editing of restricted fields. However, you still have to validate on the backend, and I would recommend checking whether a restricted column was modified before calling JPA. Typically you have the modifications in a DTO and would need to load the entity anyway; if a restricted column was modified, you would send an error back. This way you only call JPA after the security check has passed.
We would like to create an OData REST API. Our data model is such that each customer has their own database. All database objects have the same definition across all customer databases, with the exception of a single table.
The customer-specific table we will call Contact. When a customer adds a column, the system creates a column with a standardised name whose definition is translated from options selected by the user in the UI. The user only refers to the column data by a field name they have specified, which lets them write friendly queries.
It seems to me that the following approaches could be used to enable OData for the model described:
1) Create an OData open type to cater for the dynamic properties. This has the disadvantage that a user's requests for a customer give no indication of the dynamic properties that can be queried against, even though they will be known for the user (via token authentication). Also, because dynamic properties are held in a dictionary, some data pivoting and inefficient query writing would be required. I am not sure how to implement the IQueryable handling of query options for the dynamic properties to enable our own custom field querying.
2) Create a POCO class with, e.g., 50 properties: CustomField1, CustomField2... Then somehow control which fields are exposed for use in OData calls. We would then include a separate API call to expose the custom field mapping, e.g. a custom field friendly name of MobileNumber maps to CustomField12.
3) At runtime, check whether the column definitions of the table have changed since the last check. If they have, generate a class specific to the customer using CodeDom and register it with OData, aiming for a unique URL for each customer, e.g. http://domain.name/{customer guid}/odata
I think option 2 is the ideal for us. However, given that CustomField1 could have an underlying SQL data type of nvarchar, int, decimal, datetime, etc., there are added complications.
Does anyone have a working example of how to achieve what has been described satisfactorily?
Thanks in advance for any help.
Rik
We have run into a similar situation but with our entire dataset being unknown until runtime. Using the ODataConventionModelBuilder and EdmModel classes, you can add properties dynamically to the model at runtime.
I'm not sure whether you will have to add all of the properties for this object type manually, even though only some of them are unknown, or whether you can add your main object and then add the dynamic properties afterwards, but I guess either would be workable.
If you can determine which type of user it is on the server, you could then add only the properties that you are interested in (like option 3, but without having to resort to CodeDom).
There is an example of this kind of untyped OData server in the OData samples here that should get you started: https://github.com/OData/ODataSamples/tree/master/WebApi/v4/ODataUntypedSample
The research we carried out actually posed Option 1 as the most suitable approach for some operations, i.e. creating an SQL view that unpivots the data in a table to key/value pairs of column name/column value for each column. This was suitable for queries returning small datasets, was far less effort than Option 3, and was less confusing for the user than Option 2. The unpivot query converted the field values to nvarchar (string) values, which meant that filtering in the UI by the column's value data type was not simple to achieve. (If we decide to implement this ability, I believe it can be achieved by creating a custom attribute that derives from EnableQueryAttribute, marking the controller action with it, and manipulating the IQueryable before execution.)
However, we wanted to expose a /Contacts/Export endpoint that, when called, would output the columns from a table with a fixed schema, joined to a table with a client-specific schema, to a CSV file, all the while supporting the OData $filter syntax. One of our customer databases has more than 12 million rows of data and is made up of approximately 30 columns.
To achieve this, it looks like our best bet would have been to work with the Microsoft.OData.Core.UriParser.UriQueryExpressionParser class; unfortunately Microsoft, in their wisdom, have declared this internal, along with many of its dependants.
Walking an abstract syntax tree built from the OData-supported query options and applying our own visitor to each node to build a dynamic LINQ query/SQL seems like a possible solution.
For the time being we will simply implement a cut-down set of supported $filter criteria, without support for grouping parentheses.