I need your suggestions on the following multitenancy question:
I need to achieve multitenancy for my app. I already have the traditional approach working, using a DB/schema per tenant (e.g. a separate schema for each tenant).
Now I need to integrate LDAP user validation with the multitenancy setup as well.
What I am thinking is to store the user info plus the DB/schema connection details in the LDAP server, to make the app more dynamic. That way I would be able to connect to any DB/schema, irrespective of its physical location.
What's your opinion of this approach? Would it really be feasible?
If you see any cons, please share them.
Thanks & Regards.
It sounds like you are trying to host multiple clients' systems on your system, and each client may have multiple connections from multiple locations. From your question it sounds like you are putting one customer per database, though the databases may not all be on the same cluster.
The first thing to do is to lock down PostgreSQL appropriately in pg_hba.conf and expose only the database/user combinations you intend to expose. If you are going this route, LDAP sounds sane to me for publishing the connection info, since it means you can control who can access which accounts. Another option, possibly closely tied to it, would be to issue SSL certificates and use certificate authentication for clients, so each client can be issued a cert and use it to connect. You could even authenticate PostgreSQL against LDAP.
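For illustration only - the database names, role names, LDAP host and base DN below are placeholders - a pg_hba.conf along these lines would combine the per-tenant lock-down with LDAP or certificate authentication:

    # Tenant A's role may reach only tenant A's database, authenticated against LDAP (search+bind)
    hostssl  tenant_a_db  tenant_a_user  0.0.0.0/0  ldap ldapserver=ldap.example.com ldapbasedn="ou=people,dc=example,dc=com" ldapsearchattribute=uid

    # Tenant B authenticates with an SSL client certificate instead (the cert's CN must match the role)
    hostssl  tenant_b_db  tenant_b_user  0.0.0.0/0  cert

    # Everything else is rejected
    host     all          all            0.0.0.0/0  reject

The point of the last line is that any database/user combination you did not explicitly publish simply cannot be reached.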
So there are a lot of options here for using these two together once you have things otherwise properly secured. Best of luck.
We currently have a fairly complex Mongo environment with multiple query routers and data servers in different AWS regions using sharding and replication so that data will be initially written to a master shard in a local data server and then replicated to all regions.
When we first set this up we didn't add any security to the Mongo infrastructure and are using unauthenticated access for read and write. We now need to enable authentication so that the platform components that are writing data can use a single identity for write and read, and our system administrators can use their own user accounts for admin functionality.
The question is whether and how we can switch to using authentication without taking any downtime in the backend. We can change connection strings on the fly in the components that read and write to the DB, and can roll components in and out of load-balancers if we do need a restart. The concern is on the Mongo side.
Can we enable authentication without having to restart?
Can we continue to allow open access from an anonymous user after enabling authentication (to allow backward compatibility while we update the connection strings)?
If not, can we change the connection strings before we enable authentication and have Mongo accept the connection requests even though it isn't authenticating?
Can we add authorization to our DBs and Collections after the fact?
Will there be any risk to replication as we go through this process? We have a couple of TB of data and if things get out of sync it's very difficult to force a resync.
I'm sure I'm missing some things, so any thoughts here will be much appreciated.
Thanks,
Ian
I have an app that connects to different databases on a MongoDB instance. The different databases are for different clients. I want to know if my clients' data will be compromised if I use a single user to log in to the different databases. Also, does this user have to be root, or will the readWrite role do the trick? I'll be connecting to the databases through a Java backend.
There is no straightforward answer to this. It's about risk and cost-benefit.
If you use the same database user to connect to any database, then client data separation depends much more on the business logic in your application. If any part of your code can simply decide to connect to any client database, then a request from one client may (and, in my experience, eventually will) end up in a different client's database. Some factors make this more likely; for example, if many people develop your app over a long period, somebody will make a mistake.
A more secure option would be to have a central component that is rarely changed, with changes strictly monitored, which for each client session (or even request) takes the credentials for that client and uses them to connect to the database. This way, any future mistake by a developer would be limited in scope; they would not be able to use the wrong database, for example. And then we haven't even mentioned non-deliberate application flaws, which would allow an attacker to do the same, and which are much more likely. If you have strong enforcement and separation in place, a malicious user from one client may not be able to access other clients' data even in the case of some application vulnerabilities, because the connection would be limited to the right database. (Note that even in this case, your application needs access to all client database credentials, so a full breach of your application or server would still mean all client data is lost to the attacker. But not every successful attack ends in total compromise.)
Whether you do this or not should depend on the risks. One question for you to answer is how much it would cost you if a cross-client data breach happened. If it's not a big deal, separation in business logic alone is probably OK. If it means going out of business, it is definitely not enough.
As for whether the user used for the connection should be root: no, definitely not. Following the principle of least privilege, you should use a user that only has the rights it needs, i.e. readWrite on that client's database and nothing else.
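For illustration (database name, user name and password are placeholders), a MongoDB user limited to a single client's database can be created in the mongo shell like this:

    use client_a_db
    db.createUser({
      user: "client_a_app",
      pwd:  "choose-a-strong-password",
      roles: [ { role: "readWrite", db: "client_a_db" } ]
    })

The application then connects with that user for client A's requests and has no rights on any other client's database.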
I was wondering: What possibilities are there to connect to a postgres database?
I know off the top of my head that there are at least two possibilities.
The first possibility is a brute one: open a port and let users anonymously make changes.
The second way is to create a website that communicates with Postgres using SQL commands.
I couldn't find any more options on the internet, so I'm curious whether others exist - maybe one of them is the best way to communicate with Postgres over the internet.
This is more of a networking/security type question, I think.
You can have your database fully exposed to the internet which is generally a bad idea unless you are just screwing around for fun and don't mind it being completely hosed at some point. I assume this is what you mean by option 1 of your question.
You can have a firewall in front that only exposes it for certain incoming IP's. This is a little better, but still feels a little exposed for a database, especially if there is sensitive data on it.
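As a rough sketch (the network ranges, database and user names are placeholders), the same idea can be expressed either at the firewall or in pg_hba.conf itself:

    # Firewall: only the office network 198.51.100.0/24 may reach PostgreSQL's port
    iptables -A INPUT -p tcp -s 198.51.100.0/24 --dport 5432 -j ACCEPT
    iptables -A INPUT -p tcp --dport 5432 -j DROP

    # pg_hba.conf: the same restriction at the database level
    # (scram-sha-256 assumes PostgreSQL 10+; use md5 on older versions)
    host  mydb  myuser  198.51.100.0/24  scram-sha-256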
If you have a limited number of folks that need to interact with the DB, you can have it completely firewalled but allow SSH connections to the internal network (possibly the same server), and then port forward through the SSH tunnel. This is generally the best way if you need to give full DB access to people outside the DB's network, since SSH can be made much more secure than a direct DB connection by using a public/private keypair for each incoming connection. You can also allow SSH only from specific IPs through your firewall as an added level of security.
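For example (host names, ports, database and user names are placeholders), a client would forward a local port over SSH and then connect as if the database were local:

    # Forward local port 5433 to PostgreSQL (5432) on the database host, over SSH
    ssh -N -L 5433:localhost:5432 someuser@db.example.com

    # Then connect through the tunnel
    psql "host=localhost port=5433 dbname=mydb user=pguser"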
Similar to SSH, you could stand up a VPN and allow access to the LAN upon which the DB sits and control access through the VPN.
If you have a wider audience, you can allow no external access to the database at all (except for you or a DBA/administrator type person through SSH tunneling or VPN). Then build access through a website, where communication with the DB is done in server-side scripting (PHP, Node.js, Rails, .NET, what have you). This is the usual setup that every site with a database behind it uses. I assume that's what you mean by option 2 of your question.
I want to figure out how many users an ADFS 2.0 stand-alone server can support, i.e. what load the server can handle. My customer said that it supports just 100 users (which seems strange for a server performing such simple operations), and they have 700 concurrent users.
So they recommended a federation farm instead of a stand-alone server, but I prefer to check first.
So, can you share info about the load limitations of a stand-alone SSO server vs. a server farm?
Any docs, articles with numbers, expert opinions, and so on...
We have standalone servers that support WAY more than 100 users - easily over 1000.
Not sure what the upper limit is.
A farm is only going to help if you have a load balancer in front of it.
I also found an article about this problem.
The author writes that they use 2 ADFS servers for 10,000 users.
And there is a calculator to estimate the number of servers needed, depending on the load and the number of users.
There is no such restriction; it depends solely on how often users log in to your system.
We have a few ADFS deployments; one of them supports around 50,000 users and only TWO servers are enough. I even suspect one would do, but, as always, it is not a good idea to have just one server (at least two servers give you failover - you wouldn't want the whole environment to be inaccessible just because your login server died).
The idea would then be to start with two servers and monitor the infrastructure, adding other instances only when necessary.
I don't know how else to say it so I'm just going to explain my ideal scenario and hopefully you can explain to me how to implement it...
I'm creating an application with the Zend Framework that will be hosted with DreamHost. The application will be hosted on its own domain (i.e. example-app.com). Basically, a user should be able to sign up and get their own domain (sampleuser.example-app.com or example-app.com/sampleuser) that points to what looks like their own instance of the app, but is really a single instance serving up different content based on the URL.
Eventually, I want my users to be able to create their own domain (like foobar.com) that points to sampleuser.example-app.com, such that visitors to foobar.com don't notice that the site is really being served up from example-app.com.
I don't know how to do most of that stuff. How does this process work? Do I need to do some funky stuff with Apache or can this be done with a third party host, like DreamHost?
Update: Thanks for the advice! I've decided to bite the bullet and upgrade my hosting plan to utilize wildcard subdomains. It's cheaper than I was expecting! I also found out about domain reseller programs, like opensrs.com, that have their own API. I think using one of these APIs will be the solution to my domain registration issue.
Subdomains are easy. In hosting environments, Apache is in most cases configured to catch all subdomain calls below the main domain. You just need a wildcard DNS record defined, so that *.example-app.com points to the IP of your server. Then your website will catch all calls to those subdomain names.
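For instance (the IP address is a placeholder), the wildcard record in the example-app.com zone would look something like this:

    ; wildcard A record: any subdomain resolves to your server
    *.example-app.com.   3600   IN   A   203.0.113.10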
Other domains are hard. They need to be configured as virtual hosts in Apache - see http://httpd.apache.org/docs/1.3/vhosts/name-based.html - which means it will be difficult to automate, especially in a hosting environment, unless your host gives you an API to do just that. An easier and more feasible scenario would be to have a distinct IP assigned to your website; then you can catch everything with your Apache (it's probably possible to configure this through your hosting control panel, or it may work out of the box) and just point the customers' DNS at your IP.
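As a rough sketch (ServerName, DocumentRoot and the domain are placeholders), the wildcard case can be a single catch-all Apache virtual host:

    <VirtualHost *:80>
        ServerName  example-app.com
        # one catch-all vhost answers for every tenant subdomain
        ServerAlias *.example-app.com
        DocumentRoot /home/youruser/example-app.com/public
    </VirtualHost>

Customer domains like foobar.com would then need either their own ServerAlias entries added (which is where an API from your host helps), or this vhost would have to be the default one on a dedicated IP so that unmatched Host headers fall through to it.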
Then, after you have configured Apache to route all the necessary calls to your website, you can differentiate application partitions per subdomain in this way (see the sketch after this list):
get the host header from HTTP request
have a database table containing all subdomain names you're serving
make a lookup against that table to determine the instance (or user) id, and use it later for filtering data, or for selecting a database if you go with a "database per application instance" scheme.
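A minimal PHP sketch of that lookup - the "tenants" table, its columns, and the $pdo connection are assumptions made up for illustration:

    <?php
    // Assumed: $pdo is a PDO connection to your shared application database,
    // and "tenants" maps host names (subdomains or customer domains) to tenant ids.
    $host = strtolower($_SERVER['HTTP_HOST']);   // e.g. "sampleuser.example-app.com" or "foobar.com"
    $host = preg_replace('/:\d+$/', '', $host);  // strip an optional port

    $stmt = $pdo->prepare('SELECT id FROM tenants WHERE hostname = ?');
    $stmt->execute([$host]);
    $tenantId = $stmt->fetchColumn();

    if ($tenantId === false) {
        http_response_code(404);                 // unknown subdomain or domain
        exit('Unknown site');
    }

    // Use $tenantId to filter queries, or to pick a per-tenant database connection.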
Good luck :)