I am trying to understand Kerberos and Kerberos realms. Suppose a company has two branch offices and a headquarters, all in different locations, each with its own local network. The company servers are located at HQ, and clients in all offices need access to them. Would you have a KDC at each location (separate realms?) or a single KDC at HQ?
You can have a KDC in each segment of your network and then enable cross-realm authentication. The details of cross-realm authentication can be found at the site below.
http://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-kerberos.html
Good luck.
In a normal scenario you would not set up multiple realms. Multiple realms designate separate trust domains, which may not be necessary. You may decide to have a separate KDC in each office to reduce latency, and this is where physical security concerns arise: a stolen KDC means your entire user database is out in the wild. Microsoft solves that problem with read-only domain controllers (RODCs, http://technet.microsoft.com/en-us/library/cc732801(v=ws.10).aspx), but as far as I know neither MIT nor Heimdal provides anything similar. In that case you may want to put remote/branch office users in a separate realm, so that if their user database is pilfered, only they are affected. You may then want at least one more KDC for each remote realm, so that you can quickly enumerate the affected users and change their keys.
There is one more place where cross-realm trust is used: when Windows AD and UNIX hosts need to interoperate and the UNIX hosts are members of a different realm.
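For illustration, a cross-realm trust between a hypothetical HQ realm and a branch realm might look roughly like this in krb5.conf (all realm and host names here are made up):

```
# Sketch only -- realm and host names are hypothetical.
[realms]
    HQ.EXAMPLE.COM = {
        kdc = kdc.hq.example.com
        admin_server = kdc.hq.example.com
    }
    BRANCH.EXAMPLE.COM = {
        kdc = kdc.branch.example.com
        admin_server = kdc.branch.example.com
    }

[domain_realm]
    .hq.example.com = HQ.EXAMPLE.COM
    .branch.example.com = BRANCH.EXAMPLE.COM

[capaths]
    # Clients in BRANCH authenticate directly to services in HQ.
    BRANCH.EXAMPLE.COM = {
        HQ.EXAMPLE.COM = .
    }
```

The trust itself is established by creating cross-realm krbtgt principals (e.g. krbtgt/HQ.EXAMPLE.COM@BRANCH.EXAMPLE.COM) with identical keys in both KDC databases.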
I have an app that connects to different databases on a MongoDB instance. The different databases are for different clients. I want to know if my clients' data will be compromised if I use a single user to log in to the different databases. Also, is it a must for this user to be root, or will the readWrite role do the trick? I'll be connecting to the databases through a Java backend.
There is no straightforward answer to this. It's about risk and cost-benefit.
If you use the same database user to connect to every database, then client data separation depends much more on business logic in your application. If any part of your code can simply decide to connect to any client database, then a request from one client may (and, in my experience, eventually will) end up in a different client's database. Some factors make this more likely to happen: for example, if many people develop your app over a long time, somebody will eventually make a mistake.
A more secure option would be to have a central component that is very rarely changed, with changes strictly monitored, which for each client session (or even request) would take the credentials according to the client and use those to connect to the database. This way, any future mistake by a developer would be limited in scope; they would not be able to use the wrong database, for example. And then we haven't mentioned non-deliberate application flaws, which would allow an attacker to do the same, and which are much more likely. If you have strong enforcement and separation in place, a malicious user from one client may not be able to access other clients' data even in the case of some application vulnerabilities, because the connection would be limited to the right database. (Note that even in this case, your application needs access to all client database credentials, so a full breach of your application or server would still mean all client data is lost to the attacker. But not every successful attack ends in total compromise.)
Whether you do this or not should depend on risks. One question for you to answer is how much it would cost you if a cross-client data breach happened. If it's not a big deal, probably separation in business logic is ok. If it means going out of business, it is definitely not enough.
As for whether the user used for the connection should be root: no, definitely not. Following the principle of least privilege, you should use a user that only has the rights it needs, i.e. the readWrite role on that one database and nothing else.
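The "central component" idea above can be sketched as follows. This is a minimal illustration in Python with hypothetical names; in a real deployment the credentials would come from a vault or configuration store, and the URI would be handed to the MongoDB driver (here we just build the string so the sketch stays self-contained):

```python
# Sketch of a per-tenant credential resolver (hypothetical names).
# Each tenant's user is granted only the readWrite role on its own
# database, never root (principle of least privilege).

class TenantConnectionResolver:
    """Maps a tenant id to that tenant's own database credentials,
    so application code can never pick an arbitrary database."""

    def __init__(self, credentials):
        # credentials: {tenant_id: (db_name, user, password)}
        self._credentials = dict(credentials)

    def uri_for(self, tenant_id):
        try:
            db, user, password = self._credentials[tenant_id]
        except KeyError:
            # Fail closed: unknown tenants get no connection at all.
            raise PermissionError(f"unknown tenant: {tenant_id}")
        return f"mongodb://{user}:{password}@db.example.internal/{db}"

resolver = TenantConnectionResolver({
    "acme": ("acme_db", "acme_user", "s3cret"),
})
print(resolver.uri_for("acme"))
# mongodb://acme_user:s3cret@db.example.internal/acme_db
```

The point of the design is that the mapping from tenant to credentials lives in exactly one audited place, so a bug elsewhere in the application cannot route a request to the wrong client's database.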
I am not a programmer, but I have an idea that I would like to see developed. I want a cross-platform web app such that, for any DNS lookup request from any app on the device (even native apps), the device first queries our DNS server, which checks whether the service provider is a member of our system. If so, a different experience is delivered to the user; if not, the request is forwarded to the normal DNS server specified in the system settings. Is this feasible? Are there any risks to the users or to me? Can the code be kept safe from tampering?
Many thanks.
What you are describing is not easily feasible as a web application. In effect you would be running your own DNS server which the users' devices query: if the website provider is a member, you already have their DNS records loaded and serve your own set of records; if not, your server performs a forward lookup to an upstream provider to get the global DNS records for the query.
I have implemented this for a number of small and medium businesses on their local networks, so that queries for certain domains from the LAN resolve to internal addresses, both to block domains from being accessed from work and to connect users to local servers where the domain in question is hosted locally. However, to do this for client devices that are not on a single network, you would have to either install software that changes the DNS settings on the device or have the users change their DNS settings themselves, which would not give you a unified experience, as some would and some wouldn't, especially if you are talking about members of the public and their own devices.
If memory serves, there are also restrictions on mobile devices, including Android and iOS, which prevent an app from altering network settings such as DNS as a security precaution, since such an app would present a huge risk to users' online security.
Your best bet would be to simply provide DNS hosting for the service providers: they host their DNS records with you, and you can then present the enhanced experience to the end user.
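The "answer locally if the domain is a member, otherwise forward upstream" logic described above can be sketched like this. Domains and addresses are hypothetical; a real resolver would speak the DNS wire protocol (e.g. a full server such as BIND or Unbound with local zones), not work on plain strings:

```python
# Toy illustration of conditional DNS forwarding (hypothetical data).

MEMBER_RECORDS = {
    # Member domains resolve to our "enhanced experience" addresses.
    "member-site.example": "10.0.0.42",
}

def resolve(domain, upstream_lookup):
    """Return our record for member domains, else defer to the
    upstream resolver (passed in as a callable)."""
    record = MEMBER_RECORDS.get(domain.lower().rstrip("."))
    if record is not None:
        return record
    return upstream_lookup(domain)

# Stand-in for a forward lookup to the upstream provider:
fake_upstream = lambda d: "93.184.216.34"

print(resolve("member-site.example", fake_upstream))  # 10.0.0.42
print(resolve("other.example", fake_upstream))        # 93.184.216.34
```

Note that this only shows the decision logic; none of it solves the real problem raised above, which is getting client devices to send their queries to your server in the first place.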
I want to figure out how many users an ADFS 2.0 stand-alone server can support; I mean the load on the server. My customer said that it supports just 100 users (which seems strange for a server performing such simple operations), and they have 700 users at the same time.
So he recommended a federation farm instead of a stand-alone server. But I prefer to check first.
So, can you share information about the load limitations of a stand-alone SSO server vs. a server farm?
Any docs, articles with numbers, expert opinions, and so on...
We have standalone servers that support WAY more than 100 users - easily over 1,000.
I'm not sure what the upper limit is.
A farm is only going to help if you have a load balancer in front of the servers.
I also found an article about this problem.
The author writes that they use two ADFS servers for 10,000 users.
There is also a calculator to work out the number of servers depending on the load and the number of users.
There is no such restriction, and it depends solely on how often users log in to your system.
We have a few deployments of ADFS; one of them supports around 50,000 users, and only TWO servers are enough. I even suspect one would do; however, as always, it is not a good idea to have just one server (at least two servers = failover; you wouldn't want the whole environment to be inaccessible just because your login server died).
The idea would then be to start with two servers, monitor the infrastructure, and add further instances only when necessary.
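Since the limiting factor is the login rate rather than the raw user count, a rough sizing estimate looks like the following. The per-server throughput figure is purely an assumption for illustration; measure your own servers or use Microsoft's capacity planning guidance for real numbers:

```python
import math

# Back-of-the-envelope federation farm sizing (illustrative only).

def servers_needed(total_users, peak_login_fraction, peak_window_seconds,
                   logins_per_sec_per_server, min_servers=2):
    """Estimate farm size from the peak login rate.
    min_servers=2 keeps a failover node, as recommended above."""
    peak_rate = total_users * peak_login_fraction / peak_window_seconds
    return max(min_servers,
               math.ceil(peak_rate / logins_per_sec_per_server))

# e.g. 10,000 users, 80% of them logging in across a 30-minute
# morning peak, assuming ~25 logins/second per server (assumed figure):
print(servers_needed(10_000, 0.8, 30 * 60, 25))  # 2
```

With those assumptions the peak rate is only about 4.4 logins per second, which is why a two-server farm (the second one mainly for failover) covers even tens of thousands of users.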
I need your suggestions on the following multitenancy question:
I need to achieve multitenancy for my app. I already have it working with the traditional approach of a separate DB/schema for each tenant.
Now I need to integrate user validation from LDAP with the multitenancy as well.
What I am thinking is to store user info plus DB/schema info (DB connectivity details) in the LDAP server, to make the app more dynamic. With this I would be able to connect to any DB/schema, irrespective of its physical location.
What is your opinion of this approach? Would it really be feasible?
If you see any cons, please share.
Thanks & Regards.
It sounds like you are trying to host multiple clients' systems on your system and each client may have multiple connections from multiple locations. From your question it sounds like you are putting one customer per database though databases may not be on the same cluster.
The first thing to do is to lock down PostgreSQL appropriately in the pg_hba.conf and expose only those database/user combos you want to expose. If you are going this route, LDAP sounds sane to me for publishing the connection info, since it means you can control who can access which accounts. Another option, possibly closely tied, would be to issue SSL certs and use certificate authentication by clients, so they can be issued a cert and use it to connect. You could even authenticate PostgreSQL against LDAP.
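As a sketch of what "exposing only the database/user combos you want" might look like, here is a hypothetical pg_hba.conf fragment (all names, networks, and LDAP details are made up for illustration):

```
# pg_hba.conf sketch -- hypothetical names and addresses.
# Each tenant's user may reach only that tenant's database,
# authenticated against LDAP (or a client SSL certificate).
# TYPE   DATABASE     USER           ADDRESS        METHOD
hostssl  tenant1_db   tenant1_user   10.0.1.0/24    ldap ldapserver=ldap.example.internal ldapbasedn="ou=people,dc=example,dc=com" ldapsearchattribute=uid
hostssl  tenant2_db   tenant2_user   10.0.2.0/24    cert
# Everything else is rejected.
host     all          all            0.0.0.0/0      reject
```

Because pg_hba.conf rules are matched top to bottom, the final reject line ensures no combination you haven't explicitly listed can connect.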
So there are a lot of options here for using these two together once you have things otherwise properly secured. Best of luck.
We are setting up a Citrix solution for co-workers from an external partner to access applications in our organisation. The question is whether it's a bad idea, from a security perspective, to allow Citrix client drive mapping.
Does anyone know of any best practices?
We have no control over the state of the clients they connect from (for example, their antivirus software) or over their network.
This is probably a question for the Citrix forums, but here are my 2 cents:
With Citrix XenApp you can granularly control which level of data exchange between the client (where the user sits) and the server (where applications are executed and data is stored) you want to allow. One extreme is to disable every form of exchange, including the clipboard. In such a scenario the only way users can copy data from the server is via screenshots.
The other extreme is to allow everything including clipboard and client drive mapping. In that case you can copy data to and fro, both via the clipboard and via the file system.
There is no single best practice: you need to define which level of security you want and act accordingly. But beware: think of the users, too, and do not restrict them unnecessarily.