I am following the Apache Knox Gateway 0.13.x User’s Guide (http://knox.apache.org/books/knox-0-13-0/user-guide.html#High+Availability).
I want to configure Knox High Availability, and the first step in the guide is to configure the Knox instances, as quoted below:
> All Knox instances must be synced to use the same topology credential keystores. These files are located under {GATEWAY_HOME}/conf/security/keystores/{TOPOLOGY_NAME}-credentials.jceks. They are generated after the first topology deployment. Currently these files need to be synced manually. Here are the steps to sync topology credential keystores:
>
> 1. Choose a Knox instance that will be the source for topology credential keystores. Let’s call it the keystores master.
> 2. Replace the topology credential keystores in the other Knox instances with topology credential keystores from the keystores master.
> 3. Restart Knox instances.
I want to know how to actually carry this out.
Do I remove all of the {TOPOLOGY_NAME}-credentials.jceks files and keep only one?
Basically, you choose one of the Knox instances to be the master (there is nothing special about being the master). You then back up the topology credential keystores on the other Knox instances, as mentioned in the guide, and copy the master's topology credential keystores onto those instances.
Do not delete anything, and restart all Knox instances.
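A minimal sketch of that sync, assuming the same GATEWAY_HOME path on every instance, SSH access from the master, and that Knox is restarted via gateway.sh (the hostnames knox2/knox3 and the topology name are placeholders):

```sh
# Run on the keystores master: push its topology credential keystore
# to the other Knox instances, backing up the existing files rather than deleting them.
TOPOLOGY=sandbox                          # hypothetical topology name
KEYSTORE="$GATEWAY_HOME/conf/security/keystores/${TOPOLOGY}-credentials.jceks"

for host in knox2 knox3; do               # the non-master instances
  ssh "$host" "cp '$KEYSTORE' '$KEYSTORE.bak'"   # back up, do not delete
  scp "$KEYSTORE" "$host:$KEYSTORE"              # overwrite with the master's copy
  ssh "$host" "'$GATEWAY_HOME/bin/gateway.sh' stop; '$GATEWAY_HOME/bin/gateway.sh' start"
done
```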
I’ve been looking into Keycloak as an on-prem IAM and SSO solution for my company. One thing that I’m unclear on from reading the documentation is whether Keycloak’s clustered mode can handle our requirements for instance federation across sites.
We have some remote manned sites that occasionally run critical telemetry-gathering processes. Our AD domain is replicated to those sites.
The issue is that there is a single internet link to the sites. If we had Keycloak at the main office and the internet link went down for a day, any software at the remote site that relies on Keycloak to authenticate wouldn’t work (which would be a big problem).
Can we set up Keycloak in cluster mode (i.e., putting an instance at each site), so that if this link goes out, remote users can connect to their local instance automatically and authenticate with local apps? And what happens when the connection is restored and the databases are out of sync: does Keycloak automatically repair this?
Cheers
In general the answer is "yes": you can set up two Keycloak instances in different locations and link them with each other in a cluster (under the hood this is Infinispan cache replication). But it depends on the details of your infrastructure.
The main goal of a Keycloak cluster is to replicate the session cache between nodes. So in the simplest case you can set up two nodes that point to the same DB instance; when the first node goes down, the second handles the whole job, but if the DB also goes down, the second node is useless. To cover that case, each site should have both its own Keycloak node and a DB replica (how to achieve DB replication is out of scope of this topic). A third option is to use the multitenancy feature of the Keycloak application adapter: you secure the application with two separate Keycloak instances that know nothing about each other.
Try starting with this documentation article:
https://www.keycloak.org/docs/latest/server_installation/index.html#crossdc-mode
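For reference, this is roughly how two legacy (WildFly-based) Keycloak nodes are brought up in HA mode so that they form an Infinispan cluster. It is a sketch assuming both nodes are configured against the same database and that the default cluster discovery works between them; true cross-DC replication needs the extra setup described in the article above:

```sh
# Node at site A: the HA profile enables the clustered Infinispan caches
bin/standalone.sh --server-config=standalone-ha.xml -Djboss.node.name=siteA

# Node at site B: same configuration, distinct node name
bin/standalone.sh --server-config=standalone-ha.xml -Djboss.node.name=siteB
```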
A FoundationDB cluster can be configured to use SSL/TLS, but is it possible to connect to a cluster without knowing the cluster's fdb.cluster file?
In other words, is the fdb.cluster file equivalent to the username/password security scheme of other database systems?
The fdb.cluster file is composed of two main parts (see Cluster file format):

- The cluster ID (with an optional description)
- A list of one or more coordinator IP:PORT pairs
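For example, a cluster file for a cluster with three coordinators is a single line like this (the description, ID, and addresses are made up):

```
mycluster:a1b2c3d4@192.168.1.10:4500,192.168.1.11:4500,192.168.1.12:4500
```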
Any client must be able to reach at least one of the coordinators in the list to talk to the cluster, and it must have the correct cluster ID. If not, it will not be able to connect. There is no built-in service discovery.
This means that you have to provide the initial cluster file to your application yourself. Once the application connects to a coordinator node, it is able to obtain the complete list of coordinators and update the cluster file by itself (if the topology changes).
One solution for deployment is to have your application (or deployment script) download the latest fdb.cluster from an internal URL (or file share) if the file is missing, to jump-start the setup.
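A sketch of that bootstrap step, with a made-up internal URL and the default cluster file path:

```sh
# If no cluster file is present yet, fetch the current one from an
# internal location before starting the application.
CLUSTER_FILE=/etc/foundationdb/fdb.cluster
if [ ! -f "$CLUSTER_FILE" ]; then
  curl -fsS -o "$CLUSTER_FILE" https://intranet.example.com/fdb.cluster
fi
```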
Regarding authentication: unless you use TLS/SSL, the "ID" part of the cluster file acts as a pseudo clear-text password. Even if you have the correct set of coordinators, the application must also have the proper cluster ID.
It should really be considered the equivalent of the database name on a typical SQL server, though: it can easily be found, and it is transmitted in the clear if you don't use SSL. I guess it is there to prevent silly mistakes more than anything else (e.g., typing the IP:PORT of a different cluster).
You can't connect without the cluster file. This does provide some weak security, but it's much better to use the mutual TLS support if you want to run a cluster on an untrusted network.
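If you do go the TLS route, the relevant server-side options look roughly like this (a sketch, not a complete command line; the paths and the peer-verification string are placeholders, and clients must be started with matching tls_* options or FDB_TLS_* environment variables):

```sh
# Run a server process that only accepts TLS connections
# (the :tls suffix on the public address enables TLS on that listener).
fdbserver --public_address 192.168.1.10:4500:tls \
          --tls_certificate_file /etc/foundationdb/cert.pem \
          --tls_key_file /etc/foundationdb/key.pem \
          --tls_verify_peers "Check.Valid=1"
```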
I have set up an ESB cluster using JDBC connections to MS SQL databases for the local and remotely mounted config and governance registries: 1x management node and 2x worker nodes.
Our .car file contains some WS-Security policy artifacts which go to the config registry. When I deploy it to the management node it deploys OK. I have SVN deployment synchronization set up for the cluster, and when a worker picks up the .car it starts to deploy it but fails when loading the policy files into the config registry. It is trying to duplicate the policy in the shared config registry and fails. Of course that is right, but how should I deploy these 'shared' artifacts when a .car file is distributed by SVN? I need to be able to control the deployment properly. The only way I can see is via Dev Studio, which is terrible for our change management practice.
Thanks for your help.
I can recommend multiple solutions; you can decide which of them to choose.
Since you have only 2 worker nodes, you can get rid of (disable) deployment synchronization and deploy the car files to all the nodes. I believe you have some automated process, so it won't be a problem to deploy to all nodes. While doing so, modify your project to bundle the policies into one car file and the services into another. When deploying, you deploy the policies only to the management node and the services to all nodes (see the sketch after this paragraph).
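A minimal sketch of that deployment, with hypothetical hostnames, install path, and car file names:

```sh
# Deploy without dep-sync: policies go only to the management node,
# services go to every node. carbonapps is the hot-deployment directory
# for .car files in a default WSO2 install (adjust paths to your setup).
scp policies.car mgt:/opt/wso2esb/repository/deployment/server/carbonapps/
for host in mgt worker1 worker2; do
  scp services.car "$host:/opt/wso2esb/repository/deployment/server/carbonapps/"
done
```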
The second option is to add the policies to the local registry, i.e. not the config registry and not the governance registry. Then, when you deploy the car to the management node, it will add the policies to the management node's local registry. When the car file is dep-synced, the worker nodes will deploy it and add the policies to their own local registries. This avoids the worker nodes trying to add the policies to the same shared location.
By going through the question, I felt you have external databases for the local registry too. But that's not necessary: you can use the internal H2 database for the local registry. H2 databases sometimes get corrupted; if that happens, all you have to do is delete the H2 database and restart the server with the -Dsetup option. Having an external DB is fine, but it's overkill.
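For the H2 recovery just mentioned, a sketch with the default Carbon paths (check your own registry DB location before deleting anything, and stop the server first):

```sh
# Run from the ESB installation directory after stopping the server.
rm repository/database/WSO2CARBON_DB.h2.db   # drop the corrupted embedded DB
sh bin/wso2server.sh -Dsetup                 # recreate the tables on startup
```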
Got a general question about load balancing setup in JBoss (7.1.1.Final). I'm trying to set up a clustered JBoss instance with a master and slave node, and I'm using the demo app here (https://docs.jboss.org/author/display/AS72/AS7+Cluster+Howto) to prove the load balancing/session replication. I've basically followed it through to just before the 'cluster configuration' section.
I've got the app deployed to the master and slave nodes, and if I hit their individual IPs directly I can access the application fine. According to the JBoss logs and the admin console, the slave has successfully connected to the master. However, if I put something in the session on the slave and then take the slave offline, the master cannot read the item that the slave put in the session.
This is where I need some help with the general setup. Do I have to have a separate Apache httpd instance sitting in front of JBoss to do the load balancing? I thought there was a load balancing capability built into JBoss that wouldn't need the separate server, or am I just completely wrong? If I don't need Apache, could you please point me in the direction of instructions for setting up JBoss load balancing?
Thanks.
Yes, you need Apache or some other software or hardware that performs load balancing of the HTTP requests; JBoss Application Server does not provide this functionality.
For session replication to work properly, you should check that both the server configuration and the application configuration are well defined.
The server must have the cache for session replication enabled (you can use the standalone-ha.xml or standalone-full-ha.xml file for the initial configuration).
Configuring the application to replicate the HTTP session is done by adding the <distributable/> element to web.xml, as in the sketch below.
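A minimal web.xml carrying that element (servlet 3.0 schema, matching AS 7.1):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0">
  <!-- Tells the container to replicate HTTP sessions across the cluster -->
  <distributable/>
</web-app>
```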
You can see a full example at http://blog.akquinet.de/2012/06/21/clustering-in-jboss-as7eap-6/
So I am a little confused after reading the documents.
I want to set up AppFabric caching and hosting.
Can I do the following?
- DC
- SQL Server
- AppFabric1
- AppFabric2

All these computers are joined to the DC.
I want AppFabric1 to be the main host but also part of the cache cluster.
What about AppFabric2, or AppFabricX? How can I make them part of the cache cluster?
Do I have to configure AppFabric1 and AppFabric2 in Windows as part of a cluster (i.e. set up the entire environment as a cluster)?
Can I install AppFabric independently on AppFabric1 and 2 and have them cluster together and "make it work"? If so, how?
I see documentation about setting it up in a web farm and also in a workgroup... and that's it. Nothing about computers joined to a domain.
> I want to setup AppFabric caching and hosting.
Caching and hosting are two totally different things and generally don't share the same use cases.
AppFabric Caching provides an in-memory, distributed cache platform for Windows Server (previously named Velocity). The cache cluster is a collection of one or more instances of the Caching Service working together. You can easily add a new cache host without restarting the cluster, whichever "storage location" you use for the cluster configuration (an XML file or SQL Server).
> Can I install AppFabric independently on AppFabric1 and 2 and have them cluster together and "make it work"? If so - how?
Don't worry... this can be done easily during installation. In addition, there is a powerful PowerShell module to do the same thing.
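As an illustration, a sketch using the AppFabric Caching administration module (run from an elevated PowerShell session on a joined cache host):

```powershell
# Inspect and control the cache cluster from any joined cache host.
Import-Module DistributedCacheAdministration
Use-CacheCluster     # connect to the cluster configuration registered on this host
Get-CacheHost        # list every cache host in the cluster and its service status
Start-CacheCluster   # start the caching service on all hosts in the cluster
```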
AppFabric Hosting enhances the hosting of WCF and Workflow Foundation services in WAS (autostart, monitoring of hosted services, workflow persistence, ...). There is no cluster here; basically you just have to configure the monitoring/persistence DB for each server.
Just try it!
When you are adding the second node to the AppFabric cluster, make sure to choose the Join Cluster option (instead of New Cluster) and point to the path of the share where you stored the configuration (assuming that you used a file share to store the cluster configuration). That share must be accessible from AppFabric2.