How secure is this security method in PostgreSQL?

For example, I have 2 databases. One of them is called ecommerce and contains real customer information. The other is called ec1 and basically contains only views over the tables of ecommerce.
We use the ec1 database to connect to our website and apps. How secure is this method in terms of back-end security?

Exposing only ec1 is better than exposing ecommerce: you can reset ec1 from known-good values if it gets corrupted, and you can keep secret data stored only in ecommerce when your website or app doesn't need it.
However, this is only a small part of backend security. Having two databases, one with the real data and one with views of it, doesn't matter much if someone can access your server or corrupt your data.
In other words, if someone finds a way to read data they are not authorized to read, that is bad even if it comes from ec1 rather than ecommerce.
So yes, exposing only views is a better solution, but it says little about your overall security, because that mostly doesn't depend on this choice.
EDIT: A detailed explanation of backend security is way beyond the scope of a simple Stack Overflow answer (and I am probably not the best teacher), but for basic server security you must take care of:
- A firewall that blocks every request except the ones from your web apps.
- Up-to-date software.
- Strong database passwords.
- The user your application queries run as must only be able to operate on the ec1 database, while the views should be regenerated by a cron job running as a different user (see the sketch below).
These are the main security tips that come to mind.
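To make that last point concrete, here is a minimal sketch of what the grants could look like; the role names and the view name are made up for illustration, not taken from the question:

    -- Lock the view-only database down so PUBLIC gets nothing by default.
    REVOKE ALL ON DATABASE ec1 FROM PUBLIC;

    -- The role the website/app connects with: it can only read what is in ec1.
    CREATE ROLE app_user LOGIN PASSWORD 'use-a-strong-password';
    GRANT CONNECT ON DATABASE ec1 TO app_user;
    GRANT USAGE ON SCHEMA public TO app_user;
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO app_user;  -- in PostgreSQL this covers views too

    -- A separate role owns and regenerates the views (e.g. from the cron job);
    -- the application never connects as this role.
    CREATE ROLE view_owner LOGIN PASSWORD 'another-strong-password';
    ALTER VIEW customer_summary OWNER TO view_owner;           -- hypothetical view name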

Related

Keycloak users - is a good idea to differentiate users by their country?

I'm designing a fairly complex backend and now I have a doubt. Is it a good idea in Keycloak to differentiate users into different Keycloak groups by their country when I create them, for example during sign-in?
I was thinking that it could be useful to better manage users in the future.
What do you think?
There is no direct answer to such a question; it clearly depends on your application. If your application will in the future provide services based on each user's country, it might be a good idea, since your application could get this information about the user directly from Keycloak.
If you are planning to do some research on your users, it might also be a good idea, as some statistics may be country-related, or you may want country-related outputs (to relocate your cloud instances closer to the majority of your users, etc.).
There might be faster database lookups with such additional information, but I don't know whether Keycloak currently provides functionality for this. On the other hand, if I sign up for your service while I am on holiday on the other side of the world from where I usually live, your record will be useless. So this could add complexity to your application's implementation while you might not need it at all.
If you have no plans for such functionality, there is simply no reason to do it. Present-day web services tend to store more data than they actually need. For example, in the majority of recent database leaks you can see the LAST geographical coordinates stored with each user. While these might be used for precise advertisement targeting or unnecessary user screening, there is really no reason to store the last geographical coordinates of each user: such information can change with every login and should be determined at runtime. If the service does not benefit from such data, users are put at risk for no reason.
You should determine what is needed by your application and what is not, and never store or expose any additional information about your users, no matter how well your application is secured.

Making DB Query as a Microservice API

We are currently using a direct DB connection to query MongoDB from our scripts and retrieve the required data.
Is it advisable / best practice to expose data retrieval from the DB as a microservice?
It does until it doesn't :)
A service needs to get its data from somewhere, and a database is a good start. If you have high loads, you may find that you need to add a cache in the middle; see this post from Instagram Engineering: https://instagram-engineering.com/thundering-herds-promises-82191c8af57d
Edit (after comment):
Generally speaking, a service should own its database, and other services shouldn't access another service's database directly, only via its API. The idea is to keep services autonomous and let them evolve independently.
Depending on the size of the microservice, that's not always practical, since the overhead of running the service can outweigh the utility it provides (I call these nanoservices). Also, if you have a lot of services, you don't want each one to be able to talk to any other (even if not via the DB), since you just get a huge mess. The way I see it, there should be clear logical boundaries (services or microservices), and within each such logical service you may find that it makes sense to have more than one "part" (which I call aspects), e.g. because they have different scaling needs or different suitable technologies. When you set things up this way, aspects can access the same database while services shouldn't (and you can still tame the chaos :) )
One last thing to think about: who said an API has to be a REST API? You can add views on top of the data that belongs to another service, and as long as you treat them like an API (security, versioning, etc.) you can let other services access those as well.
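As a rough illustration of the views-as-API idea (the schema, view, table, and role names here are all hypothetical), the owning service can publish a versioned, read-only view and grant it only to the consuming service's role:

    -- Hypothetical: the "orders" service exposes a read-only, versioned contract
    -- to the "reporting" service without sharing its internal tables.
    CREATE SCHEMA orders_api;

    CREATE VIEW orders_api.daily_totals_v1 AS        -- version the name like an API
    SELECT order_date, count(*) AS order_count, sum(total) AS revenue
    FROM   orders
    GROUP  BY order_date;

    GRANT USAGE  ON SCHEMA orders_api TO reporting_service;
    GRANT SELECT ON orders_api.daily_totals_v1 TO reporting_service;

Treating the view name as part of the contract (daily_totals_v1, then _v2, and so on) gives you the same versioning discipline you would apply to a REST endpoint.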

Strategies for "Always-Connected" Windows Client Data Architecture

Let me start by saying: this is my first post here, it is a bit lengthy, and I haven't done Windows Forms development in years... with that in mind, please excuse me if this isn't directly a programming question, and please bear with me as I really need the help!
I have been asked to develop a Windows Forms app for our company that talks to a central (local area network) Linux server hosting a PostgreSQL database. The app is to allow users to authenticate themselves into the system and thereafter conduct the usual transactions with the PG database. Ordinarily, I would propose writing a web forms app against Mono, but the clients need to use local resources such as USB peripheral devices, so that is out of the question. While it might not seem clear, my questions are italicized below:
Dilemma #1:
The application is meant to be always connected. How should I structure my DAL/BLL - Should this reside on the server or with the client?
Dilemma #2:
I have been reading up on Client Application Services (CAS), and it seems like a great fit for authentication, as everything is exposed via URIs. I know that a .NET data provider exists for PostgreSQL, but I'm not sure whether CAS will work at all against a Linux (Debian) server. Believe me, I would get my hands dirty and try it myself, but I need to come up with a logical design first before resources are allocated to me for "trial purposes"!
Dilemma #3:
If the DAL/BLL is to reside on the server, is there any way I can create data services and expose only those services to authenticated clients? There is a (security) requirement that a connection string with a username and password to the database cannot be present on any client machine, even if security on the database side is quite rigid. I'm guessing that the only way for this to work would be to create the various CRUD data service methods exposed by an ASP.NET app, and have the Windows Forms app request or persist data through the ASP.NET app (via a URI) and have that return a result set or value. Would I be correct in assuming this? Should I be looking into WCF Data Services? And will WCF work with a non-SQL Server database?
Thank you for taking the time out to read this, but know that I am desperately seeking any advice on this! THANKS A MILLION!!!!
EDIT:
I am also considering using NHibernate as my ORM.
Some parts of your questions are complicated and beyond my expertise. However, in general you can do almost anything you put effort into, CAP theorem and the like aside.
DAL/BLL logic can in general reside in any of the tiers. I put a lot of it in my database and some in the middle tier, but that is to allow re-use in different environments, which may or may not be a goal for you. The thing is, I would carefully think through the separation-of-concerns issues here and what sort of centralization of logic you want. The further back you push the logic, the more re-usable it becomes, but this is not always a free tradeoff.
I am not entirely familiar with CAS, but from what I saw on the MSDN web site it looked like AJAX-style functionality. That could be wrong, but if it is right, then such requests may be stateless, which could be a problem if you need a constant connection.
On the whole, based on what you are saying, it sounds cleanest to build a two-tier rather than a three-tier app and have the DAL/BLL sit on the client, possibly supported by stored procedures on the server. You can then set PostgreSQL up to authenticate against whatever you use on your network (I would recommend KRB5 if you are on AD). This simplifies your data access and lets you control permissions based on the authentication against the database. Since you can authenticate users against AD, you can then set permissions accordingly.
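For example, once PostgreSQL is authenticating logins via Kerberos/GSSAPI against AD (configured in pg_hba.conf), permissions can hang off group roles rather than individual accounts; the role and table names below are assumptions, just to show the shape:

    -- A group role carries the actual permissions.
    CREATE ROLE sales_staff NOLOGIN;
    GRANT SELECT, INSERT, UPDATE ON invoices TO sales_staff;   -- hypothetical table

    -- Each AD account gets a login role matching its principal; no password
    -- is stored in the database or on the client.
    CREATE ROLE "jdoe" LOGIN;
    GRANT sales_staff TO "jdoe";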
One important consideration is going to be number of connections. PostgreSQL does have some places where every current connection must be checked and iterated through, and connection startup and tear-down overhead in some cases can be significant. So one important decision will involve connection pooling. Whether or not you use connection pooling to boost performance will depend on what you are doing but I have seen cases where PostgreSQL has handled 600 connections without serious problems.

SaaS Architecture Question from Newbie

I have developed a number of departmental client-server applications, and am now ready to begin working on moving one of these applications to a SaaS model. I have done some basic web development, but I'm a newbie when it comes to SaaS architectures.
One of the first questions that comes to mind as I try to design the architecture is the question of single vs. multi tenancy. The pros and cons of each vary significantly depending on the type of application and scale required, so I'd like to describe my application and scale needs below, and hope others can comment on how I should get started with the architecture.
The client-server application currently consists of a Firebird database and a Windows application. The database contains about 20 tables, with a few thousand records in 4 primary tables and a few hundred records in various lookup and related tables. Although the number of records is small, the size can get large, as the database can contain large BLOBs. Each customer sets up their own database and has a handful of users within the organization connected to it. When I update the db schema, a new Windows application is released; it checks the db schema and then applies the updates as needed.
For the SaaS application, I am designing for hundreds (not thousands or millions) of new customers per year. My first thought was to go with a multi-tenancy model to make updates easy (shut down, apply the updates to one database, and then start up). On the other hand, a single-tenancy model would provide a means to roll updates out to a group of customers at a time and spread the risk of data corruption; i.e., if something goes wrong with a database, it will impact one customer instead of all customers. With this idea, I was thinking of having a single web front end which would connect to a single customer database upon login. Thus, when a new customer creates an account, a new database would be created (each customer would have their own db, with multiple users as needed for the customer).
In this model, a db update would require either a process to go through each db to apply schema changes, or a trigger upon logging in to initiate a schema update similar to the client-server model currently in use.
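As a rough sketch of the "check on login" variant (the table name and versioning scheme here are assumptions, not something already decided), each tenant database can carry its own schema version that the application compares against the version it expects before running any pending migration scripts:

    -- Present in every tenant database.
    CREATE TABLE schema_version (
        version     integer      NOT NULL,
        applied_at  timestamptz  NOT NULL DEFAULT now()
    );
    INSERT INTO schema_version (version) VALUES (1);

    -- At login the application runs something like:
    SELECT max(version) FROM schema_version;
    -- and applies migrations 2..N if the result is behind the app's own version.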
Can anyone point me to information for similar applications which have been ported from client-server to SaaS? Or provide any pointers to consider? Basically I'm looking for architecture examples of taking a departmental application and making it available as a self service website for multiple customers. Thanks for any suggestions, resources, etc.
Good questions.
One thing that comes to mind is that if you have multiple databases which you roll out in a staged manner to reduce the likelihood of breaking all of your customers, you will have to address the issue of what to do if the db structure changes. You will either have to be very rigorous with respect to maintaining backward compatibility, or else deploy separate versions of your code base and somehow manage which tenants are associated with which databases.
We are providing our application using a SaaS model as well.
It was initially a Windows app that worked similarly to your multiple-database proposal. Upon login, the Windows app would authenticate against a single "licensee" database, which would then respond with connection information for a database specific to that licensee. The nice thing about this was that it provided 1) physical separation of licensee data, which our customers liked, and 2) the ability to physically locate the database on a server geographically closer to the users, which both improves performance and avoids some potentially tricky legal and regulatory issues around providing data across country boundaries.
Of course, since the app was a thick client app, we could get away with making database changes and pushing them out to one licensee at a time. When we were ready to upgrade, we could push out an updated thick client in conjunction with the new database - thereby ensuring that the codebase was a match with the database. As long as the common "licensee" authentication database stayed consistent, this worked fairly well.
On the other hand, this solution brought with it all of the problems of maintaining and managing a thick-client approach, which finally led us to the thin-client, browser-based approach.
In our new model, everything is in a single database. When we have updates, we push both the code and the db out at the same time. This solves the problem of keeping the code base consistent with the database structure. However, we are now confronted with the issues mentioned in #s 1 and 2, above, which we have yet to resolve.
I hope this provides some food for thought for you.
I, too, am interested in this question.

Using Postgresql as middle layer. Need opinion

I need some opinions.
I'm going to develop a POS and inventory software for a friend. This is a one man small scale project so I want to make the architecture as simple as possible.
I'm using Winform to develop the GUI (web interface doesn't make sense for POS software). For the database, I am using Postgresql.
The program will control access based on user roles, so either I have to develop a middle tier, using a web server, to control user access, or I can just set user privileges directly in PostgreSQL.
Developing a middle tier will be time consuming, and the maintenance will be more complex, so I prefer to set access control directly in the database.
Now it appears that using the database to control user access is troublesome. I have to set privileges for each role, and for some tables the privileges are at column level. This makes reasoning about the security very hard.
So what I'm doing now is to set all the tables to be inaccessible except by superusers. The program will connect to the database using public role. Because the tables are inaccessible by public, I'm going to make publicly accessible stored functions with SECURITY DEFINER (with superuser role). The only way to access the tables is by using these functions.
I'll put the user roles and passwords in a table. Because the user table itself is inaccessible by non-superuser, I'll make a login function, let's call it fn_login(username, password). fn_login will return a session key if login is successful.
To call other functions, we need to supply session key for the user, e.g.: fn_purchase_list(session_key), fn_purchase_new(session_key, purchase_id, ...).
That way, I'm treating the stored functions as APIs. Adding a new user will be easier, as I only need to add a new row in the user table rather than adding a new PostgreSQL role. I won't need to set privileges at column level; all controls will be done programmatically.
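For concreteness, here is a minimal sketch of what I have in mind; fn_login and fn_purchase_list are the names described above, while the table definitions, the pgcrypto usage, and the existence of a purchase table are just illustrative assumptions:

    CREATE EXTENSION IF NOT EXISTS pgcrypto;        -- for crypt() and gen_random_uuid()

    CREATE TABLE app_user (
        username text PRIMARY KEY,
        pw_hash  text NOT NULL                      -- store crypt(password, gen_salt('bf'))
    );

    CREATE TABLE app_session (
        session_key uuid PRIMARY KEY DEFAULT gen_random_uuid(),
        username    text NOT NULL REFERENCES app_user,
        expires_at  timestamptz NOT NULL
    );

    -- Runs with the definer's rights, so the connecting role never touches the tables.
    CREATE FUNCTION fn_login(p_user text, p_pass text) RETURNS uuid
    LANGUAGE sql SECURITY DEFINER AS $$
        INSERT INTO app_session (username, expires_at)
        SELECT username, now() + interval '8 hours'
        FROM   app_user
        WHERE  username = p_user AND pw_hash = crypt(p_pass, pw_hash)
        RETURNING session_key;
    $$;

    CREATE FUNCTION fn_purchase_list(p_key uuid) RETURNS SETOF purchase
    LANGUAGE sql SECURITY DEFINER AS $$
        SELECT p.*
        FROM   purchase p
        WHERE  EXISTS (SELECT 1 FROM app_session
                       WHERE session_key = p_key AND expires_at > now());
    $$;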
So what do you think? Is this approach feasible and scalable? Is there a better way to do it?
Thanks!
I believe there is a better way to do it. But since you haven't discussed what type of security you need, I cannot elaborate on specifics.
Since you are developing the application code in .NET, that code needs to be trusted (unlike a web application). Therefore, why don't you simply implement your roles and permissions in the application code, rather than the database?
My concern with your stated approach is the human overhead of stored procedures. I would much rather see you write the stated functions in C# than in PostgreSQL. Then standard version control and software development techniques can apply.
If you wait until somebody hits your database to check security, I think you'll be too late. That's a client/server mentality that went out at the end of the 90s, and it's part of the reason why n-tier architectures came into vogue. Client/server can't scale horizontally as well as an n-tier solution.
I'd advise that you take better advantage of the middle tier. Security should be a cross-cutting concern that's further up the stack than your persistence layer.
If the MANAGEMENT of the database security is the issue, then you should add the task of automating that management. That means you can store higher-level data alongside the database tables, and your application can then convert that data into the appropriate details and artifacts that the database requires.
It sounds like the database has the detail that you need; you just need to make that detail easier to manage and roll that into your app.
My honest advice: do not reinvent POS and inventory software. Take one of the existing projects and make it better.