How is group membership determined in a Wildfly cluster of standalone servers? - wildfly

I have seen that a cluster can be formed/started very easily with WildFly. Is it possible, using the "standalone" configuration, to create multiple clusters? That is, some servers should only be part of a cluster named "cluster1" and other servers should form a different cluster named "cluster2". Can a group name or something similar be provided or configured? (I am not looking for a managed domain setup.)

Yes, by specifying different multicast addresses, i.e., each cluster gets its own multicast address. Port offsets don't affect cluster membership, but they may be necessary to avoid port conflicts (when starting multiple WildFly instances on a single machine).
jboss.default.multicast.address // controls the multicast address
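As a rough sketch (the multicast addresses and port offset below are made-up values), two separate clusters could be started like this, with all members of a cluster sharing one multicast address:

# members of "cluster1"
./bin/standalone.sh -c standalone-ha.xml -Djboss.default.multicast.address=230.0.10.1
# members of "cluster2" (the port offset is only needed when running several instances on one machine)
./bin/standalone.sh -c standalone-ha.xml -Djboss.default.multicast.address=230.0.20.1 -Djboss.socket.binding.port-offset=100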

Related

Adding multiple services of the same type to Ambari

I am using Ambari to manage my Kafka cluster. I want to create another cluster which uses the same ZooKeeper as the previous cluster but is otherwise independent. I want to use the same Ambari service (UI) for this new one as well. Is this possible?
It's possible to define a host config group in Ambari, such that a subset of hosts share similar configurations (such as a different ZK chroot for each individual Kafka cluster); however, for operations like service restarts and the general display on the Kafka service page, the different host groups would not be separated.
In my experience, the host group feature has only been used when some HDFS nodes have more disks attached, or more memory, than others, so YARN and MapReduce settings were increased for them.
If you really need multiple isolated clusters, that's where external configuration management comes into play.
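For illustration, a per-host-group override could point a second set of brokers at its own ZooKeeper chroot (the hostnames and chroot path below are hypothetical):

# server.properties for the brokers in the second host config group
zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/kafka-cluster2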

VPN access for applications running inside a shared Kubernetes cluster

We are currently providing our software as a software-as-a-service on Amazon EC2 machines. Our software is a microservice-based application with around 20 different services.
For bigger customers we use dedicated installations on a dedicated set of VMs, the number of VMs (and number of instances of our microservices) depending on the customer's requirements. A common requirement of any larger customer is that our software needs access to the customer's datacenter (e.g., for LDAP access). So far, we solved this using Amazon's virtual private gateway feature.
Now we want to move our SaaS deployments to Kubernetes. Of course we could just create a Kubernetes cluster across an individual customer's VMs (e.g., using kops), but that would offer little benefit.
Instead, going forward, we would like to run a single large Kubernetes cluster in which we deploy the individual customer installations into dedicated namespaces, thereby increasing resource utilization and lowering cost compared to the fixed allocation of machines to customers that we have today.
From the Kubernetes side of things, our software works fine already, we can deploy multiple installations to one cluster just fine. An open topic is however the VPN access. What we would need is a way to allow all pods in a customer's namespace access to the customer's VPN, but not to any other customers' VPNs.
When googling the topic, I found approaches that add a VPN client to the individual container (e.g., https://caveofcode.com/2017/06/how-to-setup-a-vpn-connection-from-inside-a-pod-in-kubernetes/), which is obviously not an option.
Other approaches seem to describe running a VPN server inside K8s (which is also not what we need).
Again others (like the "Strongswan IPSec VPN service", https://www.ibm.com/blogs/bluemix/2017/12/connecting-kubernetes-cluster-premises-resources/ ) use DaemonSets to "configure routing on each of the worker nodes". This also does not seem like a solution that is acceptable to us, since that would allow all pods (irrespective of the namespace they are in) on a worker node access to the respective VPN... and would also not work well if we have dozens of customer installations each requiring its own VPN setup on the cluster.
Is there any approach or solution that provides what we need, i.e., VPN access for the pods in a specific namespace only?
Or are there any other approaches that could still satisfy our requirement (lower cost due to Kubernetes worker nodes being shared between customers)?
For LDAP access, one option might be to set up a kind of LDAP proxy, so that only this proxy would need VPN access to the customer network (by running this proxy on a small dedicated VM for each customer, and then configuring the proxy as the LDAP endpoint for the application). However, LDAP access is only one of many aspects of connectivity that our application needs, depending on the use case.
If your IPsec concentrator supports VTI, it's possible to route the traffic using firewall rules. For example, pfSense supports it: https://www.netgate.com/docs/pfsense/vpn/ipsec/ipsec-routed.html.
Using VTI, you can direct traffic using some kind of policy routing: https://www.netgate.com/docs/pfsense/routing/directing-traffic-with-policy-routing.html
However, I can see two big problems here:
You cannot have two IPsec tunnels with conflicting networks. For example, your kube network is 192.168.0.0/24 and you have two customers: A (172.12.0.0/24) and B (172.12.0.0/12). Unfortunately, this can happen (unless your customers are able to NAT those networks).
Finding the ideal criteria for the rule match (to allow the routing), since your source network is always the same. Marking packets (using iptables mangle, or even from the application) can be an option, but you will still get stuck on the first problem; see the sketch below.
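A rough sketch of that packet-marking idea (the pod CIDR, mark value, routing table number and vti0 interface are assumptions, not taken from a real setup):

# mark traffic coming from customer A's pod CIDR
iptables -t mangle -A PREROUTING -s 10.200.1.0/24 -j MARK --set-mark 0x1
# send marked traffic through a dedicated routing table that points at the VTI tunnel
ip rule add fwmark 0x1 table 100
ip route add 172.12.0.0/24 dev vti0 table 100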
A similar scenario is found in the WSO2 (API gateway provider) architecture. They solved it using a reverse proxy in each network (sad but true): https://docs.wso2.com/display/APICloud/Expose+your+On-Premises+Backend+Services+to+the+API+Cloud#ExposeyourOn-PremisesBackendServicestotheAPICloud-ExposeyourservicesusingaVPN
UPDATE:
I don't know if you use GKE. If so, Alias IP may be an option: https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips. The pods' IPs will be routable from the VPC, so you can apply some kind of routing policy based on their CIDR.
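For example (the cluster name is a placeholder), a VPC-native GKE cluster with alias IPs can be created with:

gcloud container clusters create customer-cluster --enable-ip-alias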

Apache Artemis: How can I create a Durable Subscription for a Static Cluster?

Here is the example of clustered-durable-subscription and here is clustered-static-discovery. In clustered-static-discovery we connect to only one server (the cluster automatically connects to the other servers using the cluster configuration).
As per the doc:
Normally durable subscriptions exist on a single node and can only
have one subscriber at any one time, however, with ActiveMQ Artemis
it's possible to create durable subscription instances with the same
name and client-id on different nodes of the cluster, and consume from
them simultaneously. This allows the work of processing messages from
a durable subscription to be spread across the cluster in a similar
way to how JMS Queues can be load balanced across the cluster
Do I need to add additional config for the static cluster, or will the durable subscription work fine with a static cluster without setting the client ID and subscription for every node (as I mentioned, in a static cluster we only make a connection to one node)?
The "static" part of the "clustered-static-discovery" really only refers to cluster node discovery (as the name suggests). Once the cluster nodes are discovered and the cluster is formed then the cluster will behave the same as if the discovery were dynamic (e.g. using UDP multicast). In other words, a clustered durable subscription should work the same no matter what mechanism was used on the server-side for cluster node discovery.

Can I create a GCP cluster with different machine types?

I'd like to create a cluster with two different machine types.
How would I go about doing this? What documentation is available?
I assume you are talking about a Google Container Engine cluster.
You can have machines of different types by having more than one node pool.
If you are creating the cluster in the Console, start by creating it with one node pool and, after it is created, edit the cluster to add a second node pool with a different instance configuration. This is necessary because the UI only allows one node pool at creation.
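For example (the cluster and node pool names and the machine types here are just placeholders), the same thing can be done with the gcloud CLI in two steps:

gcloud container clusters create my-cluster --machine-type=n1-standard-2 --num-nodes=3
gcloud container node-pools create high-mem-pool --cluster=my-cluster --machine-type=n1-highmem-4 --num-nodes=2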

zookeeper initial discovery

I want to use Apache ZooKeeper (or Curator) as a replicated naming service. Let's say I run 3 ZooKeeper servers and I have a dozen computers with different applications which can connect to these servers.
How should I communicate zookeeper IP addresses to clients? A configuration file which should be distributed manually to each machine?
Corba Naming service had an option of UDP broadcast discovery in which case no configuration file is needed. Is there a similar possibility in Zookeeper?
It depends where/how you are deploying. If this is at AWS you can use Route 53 or Elastic IPs. In general, the solution is some kind of DNS, i.e., a well-known hostname for each of the ZK instances.
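For example, with Curator the clients only need those well-known hostnames in their connection string (the hostnames below are placeholders):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ZkClientExample {
    public static void main(String[] args) {
        // the connection string simply lists the well-known DNS names of the ensemble
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181",
                new ExponentialBackoffRetry(1000, 3));
        client.start();
    }
}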
If you use something like Exhibitor (disclaimer, I wrote it) it's easier in that Exhibitor can work with Apache Curator to provide up-to-date cluster information.