Using PostgreSQL as a middle layer. Need opinions - postgresql

I need some opinions.
I'm going to develop POS and inventory software for a friend. This is a one-man, small-scale project, so I want to keep the architecture as simple as possible.
I'm using WinForms to develop the GUI (a web interface doesn't make sense for POS software). For the database, I am using PostgreSQL.
The program will control access based on user roles, so either I develop a middle tier (using a web server) to control user access, or I set user privileges directly in PostgreSQL.
Developing a middle tier would be time consuming, and the maintenance would be more complex, so I'd prefer to set access control directly in the database.
Now it appears that using the database to control user access is troublesome: I have to set privileges for each role, and for some tables the privileges are at column level. This makes reasoning about the security very hard.
So what I'm doing now is making all the tables inaccessible except to superusers. The program will connect to the database using the public role. Because the tables are inaccessible to public, I'm going to create publicly executable stored functions defined with SECURITY DEFINER (owned by a superuser role). The only way to access the tables will be through these functions.
I'll put the user roles and passwords in a table. Because the user table itself is inaccessible to non-superusers, I'll make a login function, let's call it fn_login(username, password). fn_login will return a session key if the login is successful.
To call the other functions, we supply the user's session key, e.g. fn_purchase_list(session_key), fn_purchase_new(session_key, purchase_id, ...).
That way, I'm treating the stored functions as an API. Adding a new user will be easier, as I only need to add a row to the user table rather than creating a new PostgreSQL role. I won't need to set privileges at column level; all access control will be done programmatically.
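Here is a minimal sketch of the idea (all table, column, and session details are just illustrative; it assumes the pgcrypto extension for password hashing and UUID generation):

    CREATE EXTENSION IF NOT EXISTS pgcrypto;

    CREATE TABLE app_user (
        username  text PRIMARY KEY,
        pw_hash   text NOT NULL,   -- crypt()-style hash, never plain text
        role_name text NOT NULL    -- application-level role, not a PG role
    );

    CREATE TABLE app_session (
        session_key uuid PRIMARY KEY DEFAULT gen_random_uuid(),
        username    text NOT NULL REFERENCES app_user,
        expires_at  timestamptz NOT NULL
    );

    CREATE TABLE purchase (
        purchase_id serial PRIMARY KEY,
        description text NOT NULL
    );

    -- Lock everything down: the connecting role has no table rights.
    REVOKE ALL ON app_user, app_session, purchase FROM PUBLIC;

    -- Runs with its owner's privileges, so it can read app_user even
    -- though the connecting role cannot.
    CREATE FUNCTION fn_login(p_username text, p_password text)
    RETURNS uuid
    LANGUAGE plpgsql SECURITY DEFINER AS $$
    DECLARE
        v_key uuid;
    BEGIN
        INSERT INTO app_session (username, expires_at)
        SELECT username, now() + interval '8 hours'
          FROM app_user
         WHERE username = p_username
           AND pw_hash  = crypt(p_password, pw_hash)
        RETURNING session_key INTO v_key;

        IF v_key IS NULL THEN
            RAISE EXCEPTION 'invalid username or password';
        END IF;
        RETURN v_key;
    END;
    $$;

    -- Every other API function validates the session key first, e.g.:
    CREATE FUNCTION fn_purchase_list(p_session_key uuid)
    RETURNS SETOF purchase
    LANGUAGE plpgsql SECURITY DEFINER AS $$
    BEGIN
        IF NOT EXISTS (SELECT 1 FROM app_session
                        WHERE session_key = p_session_key
                          AND expires_at > now()) THEN
            RAISE EXCEPTION 'invalid or expired session';
        END IF;
        RETURN QUERY SELECT * FROM purchase;
    END;
    $$;

The client would call fn_login once, store the returned key, and pass it to every other function.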
So what do you think? Is this approach feasible and scalable? Is there a better way to do it?
Thanks!

I believe there is a better way to do it. But since you haven't discussed what type of security you need, I cannot elaborate on specifics.
Since you are developing the application code in .NET, that code needs to be trusted (unlike a web application). Therefore, why don't you simply implement your roles and permissions in the application code, rather than the database?
My concern with your stated approach is the human overhead of stored procedures. I would much rather see you write the stated functions in C# rather than in PostgreSQL; then standard version control and software development techniques can apply.

If you wait until somebody has a go at your database before checking security, I think you'll be too late. That's a client/server mentality that went out at the end of the '90s, and it's part of the reason why n-tier architectures came into vogue: client/server can't scale horizontally as well as an n-tier solution.
I'd advise that you take better advantage of the middle tier. Security should be a cross-cutting concern that sits further up the stack than your persistence layer.

If the MANAGEMENT of the database security is the issue, then you should add the task of automating that management. That means you can store higher-level data alongside the database tables, and your application can convert that data into the appropriate details and artifacts that the database requires.
It sounds like the database supports the detail that you need; you just need to facilitate the management of that detail and roll it into your app.
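As a hypothetical sketch of that idea (table, role, and privilege names are made up; the privilege strings are interpolated raw, so the spec table must hold trusted input only):

    -- Describe the permission intent at a high level...
    CREATE TABLE perm_spec (
        role_name  text NOT NULL,
        table_name text NOT NULL,
        privs      text NOT NULL   -- e.g. 'SELECT, INSERT'
    );

    INSERT INTO perm_spec VALUES ('cashier', 'sale', 'SELECT, INSERT');

    -- ...and expand it into the GRANTs the database actually requires.
    -- (Assumes the cashier role and sale table already exist.)
    DO $$
    DECLARE
        r record;
    BEGIN
        FOR r IN SELECT * FROM perm_spec LOOP
            EXECUTE format('GRANT %s ON %I TO %I',
                           r.privs, r.table_name, r.role_name);
        END LOOP;
    END;
    $$;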

My honest advice: do not reinvent POS and inventory software. Take one of the existing projects and make it better.

Related

Should visualization tools like Tableau or Looker be used for multi-tenant systems?

Visualization tools like Tableau, Looker, and Apache Superset are not meant to be used for multi-tenant products.
For example, a product with thousands of users wants to offer analytics on their data. This needs to be secure, so that company A cannot see company B's visualizations. For this to work, these tools need to understand whether a user has privileges to view the data; this is usually achieved through cookies after the user has logged in.
To ensure data is only accessed by authorized users, these third-party tools should not be used. Instead, sticking to Ruby on Rails with d3.js, Highcharts, etc. is the better option: the data can be managed far more easily through the same authentication methods you use for login, so the data stays secure.
Actually, Looker handles multi-tenant data situations just fine. It is quite a common use case for Looker.
You can bind attributes to users that will force the right SQL to be written to guarantee that the user only sees appropriate data.
https://docs.looker.com/reference/explore-params/access_filter
We've got lots of customers building extranets for their businesses this way.
Disclosure: I work at Looker.
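Roughly speaking, the effect (this is an illustration of the idea, not Looker's actual generated SQL) is that every query a user runs arrives at the database already constrained by that user's bound attribute:

    -- Illustration only: the constraint an access_filter effectively adds.
    SELECT o.company_id, sum(o.revenue) AS total_revenue
      FROM orders AS o
     WHERE o.company_id = 42   -- value bound from the logged-in user's attribute
     GROUP BY o.company_id;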
The complexity of multi-tenant deployments goes far beyond the setup of some filter:
- Data privacy - you are one typo away from a data privacy breach with such filters. You should use the database's own security and privacy capabilities to isolate your tenants (see the sketch after this list).
- Performance - you need to scale the underlying database to handle the load of concurrent users.
- Customization - your tenants might need to load and analyze their own custom data, need custom reports, etc.
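On the first point, a hypothetical sketch of what database-level isolation can look like in PostgreSQL (illustrative names; note that row-level security does not constrain the table owner or superusers):

    -- Tenant isolation enforced by the database itself.
    CREATE TABLE sales (
        sale_id   bigserial PRIMARY KEY,
        tenant_id integer NOT NULL,
        amount    numeric NOT NULL
    );

    ALTER TABLE sales ENABLE ROW LEVEL SECURITY;

    -- Whatever SQL a reporting tool generates, non-owner roles only
    -- ever see rows for the tenant bound to their session.
    CREATE POLICY tenant_isolation ON sales
        USING (tenant_id = current_setting('app.tenant_id')::integer);

Each connection runs SET app.tenant_id = '7'; once after login; if the setting is missing, current_setting() errors out rather than leaking rows.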
Take a look at gooddata.com and their workspaces.
Disclosure: I work at GoodData.

How secure is this security method in PostgreSQL?

For example, I have two databases. One of them is called ecommerce, which contains real customer information. The other is called ec1, which basically contains only views over tables of ecommerce.
We use our ec1 database to connect to our website and apps. How secure is this method in terms of back-end security?
Exposing only ec1 is better than exposing ecommerce, because you can reset ec1 from your "safe" values in case of corruption, and you can keep secret data stored only in ecommerce if it doesn't need to be used by your website or your app.
However, this is only a small portion of back-end security. Having two different databases, one with real data and one with views of it, doesn't matter much if someone can access your server or corrupt your data.
I mean, if someone finds a way to read data they are not authorized to read, it is bad even if it comes from ec1 and not from ecommerce.
So yes, exposing only views is a better solution, but nothing can be said about the overall security, because it mostly doesn't depend on that.
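As a sketch of the idea (all names hypothetical, and modeling ecommerce and ec1 as schemas in a single cluster for illustration, since PostgreSQL views cannot span databases without dblink/FDW):

    CREATE SCHEMA ecommerce;
    CREATE SCHEMA ec1;

    CREATE TABLE ecommerce.customer (
        customer_id serial PRIMARY KEY,
        name        text NOT NULL,
        card_number text            -- secret: never exposed through ec1
    );

    -- The view exposes only the non-sensitive columns.
    CREATE VIEW ec1.customer AS
        SELECT customer_id, name
          FROM ecommerce.customer;

    -- The application role can reach ec1 and nothing else.
    CREATE ROLE webapp LOGIN PASSWORD 'change-me';
    GRANT USAGE  ON SCHEMA ec1 TO webapp;
    GRANT SELECT ON ALL TABLES IN SCHEMA ec1 TO webapp;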
EDIT: A detailed explanation of back-end security is way beyond the scope of a simple Stack Overflow answer (and I am probably not the best teacher), but for basic server security you must take care of:
- A firewall that blocks every request except those from your web apps.
- Updated software
- Good database passwords
- The user you use for your application queries must only be able to perform operations on the ec1 database, while the views should be regenerated by a cron job using a different user.
These are the main security enhancement tips that come to mind.

Orchard multi-tenancy without table/database proliferation

I'm looking at implementing a multi-tenant portal solution for my SaaS application using Orchard CMS. I'm pleased that multi-tenancy appears to be a first-class feature, but it looks like in order to achieve it I've got to either a) create a set of tables for each tenant with a table prefix, or b) have a separate database for each tenant.
I'm trying to build a solution for 10,000+ customers, so anything that requires physical schema changes per tenant won't scale. In our SaaS application we use a tenantID column on all tables, plus NHibernate filters and a heck of a lot of indexes, to let us scale.
I'd like to do the same in Orchard. So instead of a table set per tenant, I'd like ONE set of tables with a tenantID, and then use filters in the data access layer (NHibernate) to always pull the right data.
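At the database level, what I have in mind is roughly this (illustrative schema; the tenant filter itself would be applied by NHibernate):

    -- One shared set of tables, every row keyed by tenant.
    CREATE TABLE orders (
        order_id  bigserial PRIMARY KEY,
        tenant_id integer NOT NULL,
        total     numeric(12,2) NOT NULL
    );

    -- Composite index so per-tenant queries stay fast at 10,000+ tenants.
    CREATE INDEX idx_orders_tenant ON orders (tenant_id, order_id);

    -- Every query the data access layer emits is filtered, e.g.:
    -- SELECT order_id, total FROM orders WHERE tenant_id = :tenant;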
Questions:
1) Is this possible?
2) Has anyone done this?
3) Any thoughts on the best way? I was going to modify the MultiTenancy/NHibernate module source directly.
It is possible, but quite hard to do.
It's also most likely not a scenario for Orchard multi-tenancy, but without any further details I cannot be sure.
This feature fits best in cases where you need totally independent applications and (almost) nothing is supposed to be shared between them - like in shared hosting, for instance. The major drawback is the memory overhead, because each tenant has its own copy of the whole internal object infrastructure.
A much easier approach, instead of tweaking multi-tenancy to put a square peg in a round hole, would be to use a single tenant and implement your desired multi-tenancy scheme in a separate module of your own, from scratch. You could, e.g., have a "Tenant" content type and build your module around it.

Reading data from an Oracle DB on the iPhone

I would like to read and write data in my Oracle DB from my iPhone code.
Can you suggest some approaches?
One possible solution is to put a REST API in front of the database for your iOS app and implement methods in it to read/update/delete your model entities.
If you accessed the database directly from your iOS app, every change in your model would force you to deploy a new version of the app. With a REST API in between, you can change your model without changing the parameters or responses of your services.
Don't.
Database connections generally expect to be reliable. Connections from an iPhone aren't.
Also, any DB administrator would tell you that the first step to ensuring database security is to lock down the number of places from which the database can be directly accessed. This is why you never (or should never) see client devices talking directly to a database.
Instead, implement an intermediary (such as a web service) that accepts, e.g., HTTPS connections from the iPhone in the usual manner (NSURLConnection, etc.) and does the actual database heavy lifting itself. I'm not an Oracle expert, but I would assume they have products that help you do this with relatively little effort, given how common a task it is. If not, it should be fairly straightforward to implement your own in Java, Python, or a language of your choosing.

Strategies for "Always-Connected" Windows Client Data Architecture

Let me start by saying: this is my 1st post here, this is a bit lengthy, and I haven't done Windows Forms development in years... with that in mind, please excuse me if this isn't directly a programming question, and please bear with me as I really need the help!
I have been asked to develop a Windows Forms app for our company that talks to a central (local area network) Linux server hosting a PostgreSQL database. The app is to allow users to authenticate themselves and thereafter conduct the usual transactions with the PG database. Ordinarily I would propose writing a web forms app against Mono, but the clients need to use local resources such as USB peripheral devices, so that is out of the question. While it might not seem clear, my questions are set out below:
Dilemma #1:
The application is meant to be always connected. How should I structure my DAL/BLL - should it reside on the server or with the client?
Dilemma #2:
I have been reading up on Client Application Services (CAS), and it seems like a great fit for authentication, as everything is exposed via URIs. I know that a .NET data provider exists for PostgreSQL, but I'm not sure whether CAS will all work against a Linux (Debian) server. Believe me, I would get my hands dirty and try it myself, but I need to come up with a logical design first before resources are allocated to me for "trial purposes"!
Dilemma #3:
If the DAL/BLL is to reside on the server, is there any way I can create data services and expose only those services to authenticated clients? There is a (security) requirement whereby a connection string with the database username and password cannot be present on any client machine, even if security on the database side is quite rigid. I'm guessing the only way for this to work would be to create the various CRUD data service methods exposed by an ASP.NET app, and have the Windows Forms app request or persist data through the ASP.NET app (through a URI) and get back a result set or value. Would I be correct in assuming this? Should I be looking into WCF Data Services, and will WCF work with a non-SQL Server database?
Thank you for taking the time out to read this, but know that I am desperately seeking any advice on this! THANKS A MILLION!!!!
EDIT:
I am considering also using NHibernate as my ORM
Some parts of your question are complicated and beyond my expertise. However, in general you can do almost anything you put effort into, the CAP theorem and the like aside.
DAL/BLL logic can in general reside in any of the tiers. I put a lot of it in my database and some in the middle tier, but that is to allow re-use in different environments, which may or may not be a goal for you. The thing is, I would think carefully through the separation-of-concerns issues here and decide what sort of centralization of logic you want. The further back the logic sits, the more reusable it becomes, but this is not always a free tradeoff.
I am not entirely familiar with CAS, but it looked like AJAX-style functionality from what I saw on the MSDN site. That could be wrong, but if it is right, then you have an issue: such requests may be stateless, and this could be a problem if you need a constant connection.
On the whole, based on what you are saying, it sounds cleanest to build a two-tier rather than a three-tier app and have the DAL/BLL sit on the client, possibly supported by stored procedures on the server. You can then set PostgreSQL up to authenticate against whatever you use on your network (KRB5 is what I would recommend if you have AD). This simplifies your data access, and it allows you to control permissions based on the authentication against the database: since you can authenticate users against AD, you can set permissions accordingly.
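As a hypothetical sketch of that last point (role and table names are made up; it assumes a sales table exists and that pg_hba.conf maps Kerberos principals to matching role names):

    -- Group roles mirroring AD groups.
    CREATE ROLE cashiers NOLOGIN;
    CREATE ROLE managers NOLOGIN;

    GRANT SELECT, INSERT                 ON sales TO cashiers;
    GRANT SELECT, INSERT, UPDATE, DELETE ON sales TO managers;

    -- Individual login roles (matched to Kerberos principals via
    -- GSSAPI/KRB5 auth in pg_hba.conf) inherit their group's rights.
    CREATE ROLE alice LOGIN IN ROLE cashiers;
    CREATE ROLE bob   LOGIN IN ROLE managers;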
One important consideration is going to be the number of connections. PostgreSQL does have some places where every current connection must be checked and iterated through, and connection startup and tear-down overhead can be significant in some cases. So one important decision will involve connection pooling. Whether or not you use connection pooling to boost performance will depend on what you are doing, but I have seen PostgreSQL handle 600 connections without serious problems.