Zend Framework - How to manage subscription plan privileges - REST

I'm developing a REST API using Zend Framework 1.12.3.
I have to implement subscriptions for different types of plans (e.g. Basic and Premium), each plan having different privileges (e.g. Premium may offer instant, daily and weekly SMS notifications, while Basic may offer only weekly SMS notifications).
Also, there may be custom plans only for certain clients.
I've added a column in the users table called subscription, but what I cannot figure out is where to save the privileges for each subscription plan.
Should I save these privileges directly into the DB (i.e. create a table called subscriptions, and another one called subscriptions_privileges having as columns subscription_id, privilege_name and privilege_value), or would it be better if I save them into the config file?
Thanks

Note: actually, this question is not tied to Zend Framework; it is a system architecture question.
Short answer:
it is much easier to hardcode your subscription plans in your source code / configuration files;
it is much more flexible to store this data in a database (you can build an administration panel to let managers manage the plans, track the history of plan changes, and use the data in analytical SQL queries). Theoretically you could handle all of this by reading and writing your config files, but databases are exactly the right tool for these tasks.
P.S.: You can add a separate layer of abstraction to your application: use model objects for your subscriptions, which can be populated either from the database or from your hard-coded config files via different adapters.
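To make the adapter idea concrete, here is a minimal sketch in PHP (the subscriptions/subscriptions_privileges tables come from the question; the class names and the id/name columns are illustrative assumptions, not part of Zend Framework):

    <?php
    // Plain model object for a subscription plan.
    class SubscriptionPlan
    {
        public $name;
        public $privileges; // e.g. array('sms_notifications' => 'weekly')

        public function __construct($name, array $privileges)
        {
            $this->name = $name;
            $this->privileges = $privileges;
        }

        public function can($privilege)
        {
            return isset($this->privileges[$privilege]);
        }
    }

    // One interface, two adapters: the rest of the application does not
    // care whether plans come from a config file or from the database.
    interface PlanRepository
    {
        /** @return SubscriptionPlan|null */
        public function find($name);
    }

    // Config-file variant: plans hardcoded as nested arrays.
    class ConfigPlanRepository implements PlanRepository
    {
        private $plans;

        public function __construct(array $config)
        {
            $this->plans = $config;
        }

        public function find($name)
        {
            return isset($this->plans[$name])
                ? new SubscriptionPlan($name, $this->plans[$name])
                : null;
        }
    }

    // Database variant, using the tables proposed in the question.
    class DbPlanRepository implements PlanRepository
    {
        private $db;

        public function __construct(PDO $db)
        {
            $this->db = $db;
        }

        public function find($name)
        {
            $stmt = $this->db->prepare(
                'SELECT p.privilege_name, p.privilege_value
                   FROM subscriptions s
                   JOIN subscriptions_privileges p ON p.subscription_id = s.id
                  WHERE s.name = ?'
            );
            $stmt->execute(array($name));
            $privileges = $stmt->fetchAll(PDO::FETCH_KEY_PAIR);
            return $privileges ? new SubscriptionPlan($name, $privileges) : null;
        }
    }

With the database adapter, a custom plan for a single client is just another set of rows; no code change or redeploy is needed.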

Related

What is the best approach to save ecommerce data on blockchain?

I'm setting up a blockchain database using BigchainDB for an ecommerce platform, although it's really more of a secure backup; my application already runs on a SQL database. The blockchain database saves data in the form of assets and transactions in MongoDB, and BigchainDB also exposes all of its data via a public API. Later, I also want to query this database.
I tried searching, but didn't find a dedicated discussion of database design for e-commerce on a blockchain. If you know of any such article, let me know; it would be helpful.
My working assumptions so far:
Information such as the user_profile, orders, products, and reviews can be saved in the form of assets, while operations like transferring a product from the seller to the customer can be saved as transactions. Likewise, a customer creates a review as an asset, while attaching the review to a product would be a transaction.
Of course, I will need to create key pairs as identities for individual users, but I think I shouldn't save them in the blockchain, as its data is accessible via the public API. So I can save them in the application's actual SQL database.
Do you think it's the best way? Any suggestions from your side?
I am not sure you can have a "schema" in any blockchain. For your purpose, which I believe is a little strange, https://github.com/ssbc/ssb-db should be enough.
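For concreteness, here is how the asset/transaction split described in the question might look as plain data (sketched as PHP arrays; the field names are illustrative and not BigchainDB's exact wire format):

    <?php
    // A review stored as an asset: created once, immutable payload.
    $reviewAsset = array(
        'data' => array(
            'type'       => 'review',
            'product_id' => 'prod-42', // reference back into the SQL database
            'rating'     => 5,
            'text'       => 'Great product!',
        ),
    );

    // Attaching the review to the product modeled as a transfer-style
    // transaction: it links the existing asset to a new context instead
    // of creating new data.
    $attachReview = array(
        'operation' => 'TRANSFER',
        'asset_id'  => '<id returned when the review asset was created>',
        'inputs'    => array('<customer public key>'),
        'outputs'   => array('<product catalog public key>'),
    );

The key pairs themselves, as the question suggests, would stay in the private SQL database; only the public keys ever appear in these payloads.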

Should visualization tools like tableau or looker be used for multi-tenant systems?

Visualization tools like Tableau, Looker, and Apache Superset are not supposed to be used for multi-tenant products.
For example, a product with thousands of users would like to offer analytics on their data. This needs to be secure, so company A cannot see company B's visualizations. For that to work, these tools need to understand whether a user has the privilege to view the data, which is usually achieved through cookies after the user has logged in.
To ensure data is only accessed by authorized users, these third-party tools should not be used. Instead, sticking to Ruby on Rails with d3.js, Highcharts, etc. is the better option: the data can be managed much more easily through the same authentication methods you use to log in, so the data stays secure.
Actually, Looker handles multi-tenant data situations just fine. It is quite a common use case for Looker.
You can bind attributes to users that will force the right SQL to be written to guarantee that the user only sees appropriate data.
https://docs.looker.com/reference/explore-params/access_filter
We've got lots of customers building extranets for their businesses this way.
Disclosure: I work at Looker.
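To illustrate what a row-level access filter like the one linked above boils down to, here is a minimal sketch in plain PHP and SQL (the users/daily_sales tables and company_id column are made up; this shows the general technique, not Looker's implementation):

    <?php
    // Every analytics query is forced through a tenant filter bound to
    // the logged-in user, so company A can never read company B's rows.
    function tenantScopedQuery(PDO $db, $userId, $sql)
    {
        // Look up the tenant attribute bound to this user.
        $stmt = $db->prepare('SELECT company_id FROM users WHERE id = ?');
        $stmt->execute(array($userId));
        $companyId = $stmt->fetchColumn();

        // Append the tenant filter (assumes $sql already has a WHERE clause).
        $stmt = $db->prepare($sql . ' AND company_id = ?');
        $stmt->execute(array($companyId));
        return $stmt->fetchAll(PDO::FETCH_ASSOC);
    }

    // Usage (given a PDO connection $db and the current user's id):
    // $rows = tenantScopedQuery($db, $currentUserId,
    //     'SELECT day, revenue FROM daily_sales WHERE day >= CURRENT_DATE - 30');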
The complexity of multi-tenant deployments goes far beyond the setup of some filter:
Data privacy - you are one typo away from a data privacy breach with the filters. You should use the database security and privacy capabilities to isolate your tenants.
Performance - you need to scale the underlying database to handle the load of concurrent users.
Customization - your tenants might need to load and analyze their own custom data. They need custom reports, etc.
Take a look at gooddata.com and their workspaces.
Disclosure: I work at GoodData.

How to store large user-specific data

So I'm in the middle of planning a little web app that will require quite large amounts of data stored at the user level. In one case, the system takes a large object from the system level and makes a "user-specific" version of it, and a user can have multiple of these. The simplest comparison is a form stored in a Google spreadsheet, where the user is expected to take the template spreadsheet and then change not only the answers but also the questions.
Security-wise, I am quite OK.
In the second case there is a requirement to store multiple objects, sized from about 250 KB to maybe 3 MB, again at the user level, with the potential to move them to the system level so additional users can access them. As an example, say the user can upload pictures but may not want to share all of them; however, they may choose to "publish" a small number because they are happy with those specific pictures.
What design patterns should I consider, specifically around web apps where users have decent amounts of data? For example, would it make the most sense to use a single large database with a table that keeps track of resources, or to create separate tables per user?
I have considered putting it all in a mongo database.
Your approach may be wrong.
If you want to store user based binary data and make it accessible for the user itself or the community, you would need a hierarchic structure like so:
userid1/
    pic1, pic2, pic3
userid2/
    pic4, pic5, pic6
community/
    pic7, pic8
You could then grant read permissions to "community" for all users, and permission for each user to its own directory.
Usually there is nothing wrong using a database to store binary files if you consider partitioning, role permissions and an applicable interface to access the data.
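To sketch the single-large-database option from the question (one table tracking every user's resources; the schema here is hypothetical), ownership and "publishing" reduce to a user_id column plus a visibility flag:

    <?php
    // Hypothetical table:
    //   CREATE TABLE user_files (
    //       id         SERIAL PRIMARY KEY,
    //       user_id    INTEGER NOT NULL,
    //       visibility VARCHAR(10) NOT NULL DEFAULT 'private', -- 'private' | 'public'
    //       data       BYTEA -- or a path/key into external blob storage
    //   );

    // A user sees their own files plus everything published to the community.
    function listVisibleFiles(PDO $db, $userId)
    {
        $stmt = $db->prepare(
            "SELECT id, user_id, visibility
               FROM user_files
              WHERE user_id = ? OR visibility = 'public'"
        );
        $stmt->execute(array($userId));
        return $stmt->fetchAll(PDO::FETCH_ASSOC);
    }

    // "Publishing" moves a file to the community level without copying it.
    function publishFile(PDO $db, $userId, $fileId)
    {
        $stmt = $db->prepare(
            "UPDATE user_files SET visibility = 'public'
              WHERE id = ? AND user_id = ?" // owner-only guard
        );
        $stmt->execute(array($fileId, $userId));
        return $stmt->rowCount() === 1;
    }

The same table works whether the bytes live in the row or in external storage with only a key stored here, which matters once objects reach the 3 MB end of the stated range.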
My suggestion is to use a binary repository like Artifactory.
It provides hierarchic structures, simple search queries using HTTP requests and has caching abilities for frequently queried objects.
HTTP requests are also a lot easier to use, and the abstraction layer over the data makes access more secure.
Artifactory is free.

SaaS Architecture Question from Newbie

I have developed a number of departmental client-server applications, and am now ready to begin working on moving one of these applications to a SaaS model. I have done some basic web development, but I'm a newbie when it comes to SaaS architectures.
One of the first questions that comes to mind as I try to design the architecture is the question of single vs. multi tenancy. The pros and cons of each vary significantly depending on the type of application and scale required, so I'd like to describe my application and scale needs below, and hope others can comment on how I should get started with the architecture.
The client-server application currently consists of a Firebird database and a Windows application. The database contains about 20 tables containing a few thousand records in 4 primary tables, and a few hundred records in various lookup and related tables. Although the number of records is small, the size can get large, as the database can contain large BLOBS. Each customer sets up their own database and has a handful of users within the organization connected to it. When I update the db schema, a new windows application is released, and it checks the db schema and then applies the updates as needed.
For the SaaS application, I am designing for hundreds (not thousands or millions) of new customers per year. My first thought was to go with a multi-tenancy model to make updates easy (shut down, apply the updates to one database, and then start up). On the other hand, a single-tenancy model would provide a means to roll updates out to a group of customers at a time and spread the risk of data corruption - i.e. if something goes wrong with a database, it will impact one customer instead of all customers. With this idea, I was thinking of having a single web front-end which would connect to a single customer database upon login. Thus, when a new customer creates an account, a new database would be created (each customer would have their own db, with multiple users as needed for the customer).
In this model, a db update would require either a process to go through each db to apply schema changes, or a trigger upon logging in to initiate a schema update similar to the client-server model currently in use.
Can anyone point me to information for similar applications which have been ported from client-server to SaaS? Or provide any pointers to consider? Basically I'm looking for architecture examples of taking a departmental application and making it available as a self service website for multiple customers. Thanks for any suggestions, resources, etc.
Good questions.
One thing that comes to mind is that if you have multiple databases which you roll out in a staged manner to reduce the likelihood of breaking all of your customers, you will have to address the issue of what to do if the db structure changes. You will either have to be very rigorous with respect to maintaining backward compatibility, or else deploy separate versions of your code base and somehow manage which tenants are associated with which databases.
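As a sketch of the "process to go through each db to apply schema changes" that the question mentions, a staged rollout is essentially a loop over tenant databases with per-tenant version bookkeeping (the schema_version table and DSN handling here are assumptions):

    <?php
    // Apply pending migrations tenant by tenant, so a bad migration
    // stops early instead of breaking every customer at once.
    function migrateTenants(array $tenantDsns, array $migrations)
    {
        foreach ($tenantDsns as $tenant => $dsn) {
            $db = new PDO($dsn);
            $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

            // Assumed bookkeeping table: schema_version(version INTEGER).
            $current = (int) $db->query('SELECT MAX(version) FROM schema_version')
                               ->fetchColumn();

            foreach ($migrations as $version => $sql) {
                if ($version <= $current) {
                    continue; // this tenant is already up to date
                }
                $db->beginTransaction();
                try {
                    $db->exec($sql);
                    $stmt = $db->prepare('INSERT INTO schema_version (version) VALUES (?)');
                    $stmt->execute(array($version));
                    $db->commit();
                } catch (Exception $e) {
                    $db->rollBack();
                    // Halt the rollout; fix the migration before touching more tenants.
                    throw new RuntimeException("Migration $version failed for $tenant", 0, $e);
                }
            }
        }
    }

This also makes the staged-rollout concern above explicit: tenants whose schema_version lags behind must keep running an older, compatible code base until their turn comes.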
We are providing our application using a SaaS model as well.
It was initially a Windows app which worked similarly to your multiple-database proposal. Upon login, the Windows app would authenticate against a single "licensee" database, which would then respond with connection information for a database specific to that licensee. The nice thing about this was that it 1) provided physical separation of licensee data, which our customers liked, and 2) enabled us to physically locate the database on a server geographically closer to the users, which both improves performance and avoids some potentially tricky legal and regulatory issues with respect to providing data across country boundaries.
Of course, since the app was a thick client app, we could get away with making database changes and pushing them out to one licensee at a time. When we were ready to upgrade, we could push out an updated thick client in conjunction with the new database - thereby ensuring that the codebase was a match with the database. As long as the common "licensee" authentication database stayed consistent, this worked fairly well.
On the other hand, though, this solution brought with it all of the problems of maintaining and managing a thick-client approach, which finally led us down the thin-client, browser-based path.
In our new model, everything is in a single database. When we have updates, we push both the code and the db out at the same time. This solves the problem of keeping the code base consistent with the database structure. However, we are now confronted with the issues mentioned in points 1) and 2) above, which we have yet to resolve.
I hope this provides some food for thought for you.
I, too, am interested in this question.
Thanks for the post.
-S

Using Postgresql as middle layer. Need opinion

I need some opinions.
I'm going to develop a POS and inventory software for a friend. This is a one man small scale project so I want to make the architecture as simple as possible.
I'm using WinForms to develop the GUI (a web interface doesn't make sense for POS software). For the database, I am using PostgreSQL.
The program will control access based on user roles, so either I have to develop a middle tier, using a web server, to control user access, or I can just set user privileges directly in PostgreSQL.
Developing a middle tier will be time-consuming, and the maintenance will be more complex, so I prefer to set access control directly in the database.
Now it appears that using the database to control user access is troublesome: I have to set privileges for each role, and for some tables the privileges are at the column level. This makes reasoning about the security very hard.
So what I'm doing now is to make all the tables inaccessible except to superusers. The program will connect to the database using the public role. Because the tables are inaccessible to public, I'm going to create publicly accessible stored functions with SECURITY DEFINER (owned by a superuser role). The only way to access the tables is through these functions.
I'll put the user roles and passwords in a table. Because the user table itself is inaccessible by non-superuser, I'll make a login function, let's call it fn_login(username, password). fn_login will return a session key if login is successful.
To call other functions, we need to supply session key for the user, e.g.: fn_purchase_list(session_key), fn_purchase_new(session_key, purchase_id, ...).
That way, I'm treating the stored functions as an API. Adding a new user will be easier, as I only need to add new rows to the user table rather than creating new PostgreSQL roles, and I won't need to set privileges at the column level. All control will be done programmatically.
So what do you think? Is this approach feasible and scalable? Is there a better way to do it?
Thanks!
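A minimal sketch of the client side of this stored-function API (fn_login and fn_purchase_list are the names from the question; the connection details and return conventions are assumptions):

    <?php
    // Connect as the low-privilege public role; the tables themselves are
    // locked down and only the SECURITY DEFINER functions are callable.
    $db = new PDO('pgsql:host=localhost;dbname=pos', 'public_app', 'secret');
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    // fn_login returns a session key on success, NULL otherwise (assumed).
    $stmt = $db->prepare('SELECT fn_login(?, ?)');
    $stmt->execute(array('cashier1', 'pa55word'));
    $sessionKey = $stmt->fetchColumn();

    if ($sessionKey === null || $sessionKey === false) {
        exit('Login failed');
    }

    // Every subsequent call passes the session key; the function body is
    // responsible for validating it and checking the user's role before
    // touching any table.
    $stmt = $db->prepare('SELECT * FROM fn_purchase_list(?)');
    $stmt->execute(array($sessionKey));
    $purchases = $stmt->fetchAll(PDO::FETCH_ASSOC);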
I believe there is a better way to do it. But since you haven't discussed what type of security you need, I cannot elaborate on specifics.
Since you are developing the application code in .NET, that code needs to be trusted (unlike a web application). Therefore, why don't you simply implement your roles and permissions in the application code, rather than the database?
My concern with your stated approach is the human overhead of stored procedures. I would much rather see you write the stated functions in C# than in PostgreSQL; then standard version control and software development techniques could apply.
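For comparison, the application-level role check this answer recommends can be as small as a role-to-permission map consulted before each operation; sketched here in PHP to match the rest of the thread, although the answer suggests C# (all names are hypothetical):

    <?php
    class Authorizer
    {
        // role => list of allowed operations
        private static $permissions = array(
            'cashier' => array('purchase.list', 'purchase.new'),
            'manager' => array('purchase.list', 'purchase.new', 'inventory.adjust'),
        );

        public static function can($role, $operation)
        {
            return isset(self::$permissions[$role])
                && in_array($operation, self::$permissions[$role], true);
        }
    }

    // Checked before running any query over an ordinary, fully-privileged
    // application connection:
    $currentUserRole = 'cashier'; // would come from the login step
    if (!Authorizer::can($currentUserRole, 'purchase.new')) {
        throw new RuntimeException('Permission denied');
    }

The map itself lives in normal source files, so it is versioned and reviewed like any other code, which is exactly the point this answer makes about stored procedures.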
If you wait until somebody is already at your database to check security, I think you'll be too late. That's a client/server mentality that went out at the end of the '90s, and it's part of the reason why n-tier architectures came into vogue. Client/server can't scale horizontally as well as an n-tier solution.
I'd advise that you take better advantage of the middle tier. Security should be a cross-cutting concern that's further up the stack than your persistence layer.
If the MANAGEMENT of the database security is the issue, then you should automate that management. That means you can store higher-level data alongside the database tables, and your application can convert that data into the appropriate details and artifacts that the database requires.
It sounds like the database has the detail that you need, you just need to facilitate the management of that detail, and roll that in to your app.
My honest advice: do not reinvent POS and inventory software. Take one of the existing projects and make it better.