Connecting client to JBoss cluster

I am new to JBoss. Basically I have managed to cluster 2 nodes with:
Node 1: run.bat -c all -g DefaultPartition -u 230.0.0.4 -b 10.67.51.28 -Djboss.messaging.ServerPeerID=1
Node 2: run.bat -c all -g DefaultPartition -u 230.0.0.4 -b 10.67.50.21 -Djboss.messaging.ServerPeerID=2
I know that if I configure Apache load balancing (mod_jk) to sit in front of the cluster, the client can simply use the IP of the Apache server, and Apache will redirect the traffic to the nodes.
But I do not want to have Apache in front of the cluster. So how does my client access the cluster? Do I need to configure something in JBoss, or is it a must to have a load balancer for the client to access the cluster?
Many thanks in advance.

Apache is not strictly needed to perform failover, but you will need some infrastructure layer to redirect requests to the other server when the first one is down.
To achieve failover with JBoss, the usual setup is several JBoss nodes (in cluster mode, to replicate session data) with an HTTP-level network infrastructure in front that routes each request to the correct JBoss instance. Several routing strategies are possible, e.g. balancing sessions across the available nodes (the default used by most Java EE systems), or one node taking the whole load with an automatic IP switch when the environment detects that the node is down.
The first strategy is provided by mod_jk and is probably the simplest at a reasonable cost.
For full high availability you will need a completely redundant infrastructure (routers, switches, etc.) and several reverse proxies (the Apache nodes) behind a hardware HA load balancer.
If you only have 2 JBoss nodes, how would a request going to the node that is down get rerouted to the failover node?
If it helps, think of the Apache node as a "failover request router"...
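If you do choose mod_jk, a minimal workers.properties sketch for the two nodes above could look like this (AJP port 8009 is the JBoss default; treat it as an illustration, not a drop-in config):
# Two AJP workers (one per JBoss node) behind one load-balancer worker.
worker.list=loadbalancer
worker.node1.type=ajp13
worker.node1.host=10.67.51.28
worker.node1.port=8009
worker.node2.type=ajp13
worker.node2.host=10.67.50.21
worker.node2.port=8009
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1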

Related

Is a load balancer unnecessary for the k3s embedded etcd HA solution?

I opened the same discussion in the k3s GitHub repository, but no one replied. I hope someone can give an answer here.
There are articles about the embedded etcd HA solution of k3s, like this one. One of the key behaviors is adding a load balancer (an EIP as in that article, or an LB from the cloud provider) between the agents and masters:
k3s agent --> load balancer --> master
The k3s architecture also shows that a Fixed Registration Address is necessary.
However, after some research I found that k3s (at least v1.21.5+k3s2) has an internal agent load balancer (configured at /var/lib/rancher/k3s/agent/etc/k3s-agent-load-balancer.yaml) which automatically updates its list of master API servers. So is the outside load balancer unnecessary?
I got a response from the k3s discussion:
https://github.com/k3s-io/k3s/discussions/4488#discussioncomment-1719009
Our documentation lists a requirement for a "fixed registration endpoint" so that nodes do not rely on a single server being online in order to join the cluster. This endpoint could be a load-balancer or a DNS alias, it's up to you. This is only needed when nodes are registering to the cluster; once they have successfully joined, they use the client load-balancer to communicate directly with the servers without going through the registration endpoint.
I think this is good enough to answer this question.
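For illustration, joining a new agent through such a fixed registration endpoint could look like this (the DNS alias k3s.example.com is an assumption; any LB or DNS alias pointing at the servers works):
# Register through the fixed endpoint; once joined, the agent's internal
# client load balancer talks to the servers directly.
k3s agent \
  --server https://k3s.example.com:6443 \
  --token "<node-token>"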
Yes, an external load balancer is still required to achieve a highly available setup with more than one master node.
Whenever you start a worker node or use the API, you should connect to the external load balancer to ensure you can connect to a running master node if one master is currently down.
The internal load balancer you mentioned above distributes any load within your cluster.

kubernetes service load balancing based on zone

Let us say I have two zones, zone1 and zone2, with 2 apps deployed in each zone. App1 is a client which fetches information from App2, and App1 connects to App2 using a k8s Service. How can I configure App1 of zone1 to connect to App2 of zone1 preferably, and to fall back to App2 of zone2 if App2 of zone1 is loaded or down?
Though this can be achieved at the application layer using Zuul and Ribbon with a headless Service, I want to move this to the infrastructure layer. Is there any way to do this in k8s?
I see that IPVS supports the Locality-Based Least Connection algorithm, but I am not sure whether k8s supports it; the documented algorithms are rr, wrr, lc, and sed, with no mention of lblc. If lblc is supported, is it a better solution for preferring the same node, or a pod in the same DC/zone?
NOTE: This solution is purely for an on-prem k8s cluster.
I will answer only part of your question, as I have no experience with "best practices" configuration in this area.
But what I can share with you is that Kubernetes definitely supports the Locality-Based Least Connection algorithm.
You can find this in the source code:
// LocalityBasedLeastConnection assigns jobs destined for the same IP address to the same server if
// the server is not overloaded; otherwise it assigns jobs to servers with fewer jobs.
LocalityBasedLeastConnection IPVSSchedulerMethod = "lblc"
// LocalityBasedLeastConnectionWithReplication assigns jobs destined for the same IP address to the
// least-connection node in the server set for that IP address. If all the nodes in the server set are overloaded,
// it picks a node with fewer jobs in the cluster and adds it to the server set for the target.
// If the server set has not been modified for the specified time, the most loaded node is removed from the server set,
// in order to avoid a high degree of replication.
LocalityBasedLeastConnectionWithReplication IPVSSchedulerMethod = "lblcr"
You can find info on how to enable IPVS here: https://kubernetes.io/blog/2018/07/09/ipvs-based-in-cluster-load-balancing-deep-dive/
P.S. The above article doesn't contain any info about lblc, but as per the source code, k8s supports it.
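As a sketch, IPVS mode and the scheduler are selected on kube-proxy (the flags below are the standard ones; verify that your kube-proxy version accepts lblc as a value):
# Run kube-proxy in IPVS mode with the lblc scheduler.
kube-proxy --proxy-mode=ipvs --ipvs-scheduler=lblc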

How can I achieve an active/passive setup across multiple kubernetes clusters?

We have 2 kubernetes clusters hosted in different data centers, and we're deploying the applications to both clusters. We have an external load balancer outside the clusters, but the load balancer only accepts static IPs. We don't have control over the clusters and we can't provision a static IP. How can we go about this?
We've also tried Kong as an API gateway. We were able to create an upstream with the load-balanced application endpoints as targets and give them different weights, but this doesn't give us active/passive or active/failover. Is there a way we can configure a Kong/nginx upstream to achieve this?
Consider using HAProxy, where you can configure your passive cluster as a backup upstream to get an active/passive setup working. As mentioned in this nice guide about HAProxy:
backup meaning it won’t participate in the load balance unless both the nodes above have failed their health check (more on that later). This configuration is referred to as active-passive since the backup node is just sitting there passively doing nothing. This enables you to economize by having the same backup system for different application servers.
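A minimal haproxy.cfg sketch of that idea (the addresses and the health-check path are placeholders, not your real cluster endpoints):
# All traffic goes to the active cluster; the passive one only receives
# traffic after the active server fails its health check.
frontend fe_main
    bind *:80
    default_backend be_clusters

backend be_clusters
    option httpchk GET /healthz
    server active  cluster-a.example.com:80 check
    server passive cluster-b.example.com:80 check backup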
Hope it helps!

access k8s apis from outside

I want to access k8s API resources. My cluster is a 1-node cluster, and the kube-apiserver is listening on ports 8080 and 6443. curl localhost:8080/api/v1 works inside the node. If I hit :8080 from outside, it doesn't work because some other service (Eureka) is running on that port. This leaves me the option of accessing :6443. To make the API accessible, there are 2 ways:
1- Create a Service for the kube-apiserver with some specific port which will target 6443. For that, ca.crt, key, token, etc. are required. How do I create and configure such things so that I will be able to access the API?
2- Make a change in Weave (Weave is available as a service in the k8s setup) so that my server can access the k8s APIs.
Either option is fine with me. Any help will be appreciated.
my cluster is a 1-node cluster
One of those words does not mean what you think it does. If you haven't already encountered it, you will eventually discover that the memory and CPU pressure of attempting to run all the components of a kubernetes cluster on a single Node will cause memory exhaustion, and then lots of things won't work right with some pretty horrible error messages.
I can deeply appreciate wanting to start simple, but you will be much happier with a 3 machine cluster than trying to squeeze everything into a single machine. Not to mention the fact that only having a single machine won't surface any networking misconfigurations, which can be a separate frustration when you think everything is working correctly and only then go to scale your cluster up to more Nodes.
some other service (eureka) is running on this port.
Well, at the very real risk of stating the obvious: why not move one of those two services to listen on a separate port from one another? Many cluster provisioning tools (I love kubespray) have a configuration option that allows one to very easily adjust the insecure port used by the apiserver to be a port of your choosing. It can even be a privileged port (that is: less than 1024) because docker runs as root and thus can --publish a port using any number it likes.
If having the :8080 is so important to both pieces of software that it would be prohibitively costly to relocate the port, then consider binding the "eureka" software to the machine's IP and bind the kubernetes apiserver's insecure port to 127.0.0.1 (which is certainly the intent, anyway). If "eureka" is also running in docker, you can change its --publish to include an IP address on the "left hand side" to very cheaply do what I said: --publish ${the_ip}:8080:8080 (or whatever). If it is not using docker, there is still a pretty good chance that the software will accept a "bind address" or "bind host" through which you can enter the ip address, versus "0.0.0.0".
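For illustration only (the IP and image name below are placeholders, not from your setup):
# Bind "eureka" to the machine's IP so 127.0.0.1:8080 stays free for the
# apiserver's insecure port; the IP and image name are placeholders.
docker run -d --publish "${the_ip}:8080:8080" eureka-image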
1- Create a Service for the kube-apiserver with some specific port which will target 6443. For that, ca.crt, key, token, etc. are required. How do I create and configure such things so that I will be able to access the API?
Every Pod running in your cluster has the option of declaring a serviceAccountName, which by default is default, and the effect of having a serviceAccountName is that every container in the Pod has access to the components you mentioned: the CA certificate and a JWT credential that enables the Pod to invoke the kubernetes API (which from within the cluster one can always reach via the kubernetes Service IP, the environment variable $KUBERNETES_SERVICE_HOST, or the hostname https://kubernetes -- assuming you are using kube-dns). Those serviceAccount credentials are automatically projected into the container at /var/run/secrets/kubernetes.io/serviceaccount without requiring that your Pod declare those volumeMounts explicitly.
So, if your concern is that one must have credentials from within the cluster, that concern can go away pretty quickly. If your concern is access from outside the cluster, there are a lot of ways to address that concern which don't directly involve creating all 3 parts of that equation.
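For example, from inside any Pod, a minimal sketch of calling the API with those projected credentials:
# Call the apiserver from inside a Pod using the projected
# ServiceAccount credentials (default token and CA certificate).
SA=/var/run/secrets/kubernetes.io/serviceaccount
curl --cacert "$SA/ca.crt" \
     -H "Authorization: Bearer $(cat "$SA/token")" \
     https://kubernetes.default.svc/api/v1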

Wildfly load-balancing across multiple machines

I am trying to configure a wildfly server with a load balancer for learning purposes. Here's what I got:
Three VMs, only accessible by their IPs.
One is 152.238.224.58 - my load balancer
Another one is 152.238.224.59 - my first backend server
The last one is 152.238.224.60 - my second backend server
I find the WildFly documentation to be rather poor, but after watching Stuart Douglas's explanation of how the load balancer works, I currently have my first VM running a cluster of servers. Load balancing works, but everything is on the same VM (the first one). What I'd rather have is the load balancer acting as a proxy for the two backend servers.
I've tried the method described on the Wildfly documentation but didn't manage to make it work.
What would I need to do to have the first VM load-balancing across the other two VMs? To go even further, how difficult would it be to have the first VM act as a load balancer between VM-2 and VM-3, where VM-2 and VM-3 are clusters (would they then have their own load balancers?)?
Thanks a lot for any indication.
From WildFly version 10.1 on, there is a load balancer profile as part of the WildFly installation. Just use it. I'm providing sample steps here (based on my demo scripts for MS Azure).
Load balancer
Use the standalone-load-balancer.xml profile for the load balancer. WildFly 10.1 has the profile within the examples. WildFly 11 has it as a standard profile in the configuration directory.
WILDFLY_HOME=/path/to/wildfly
# MY_IP=$(ip route get 8.8.8.8 | awk '{print $NF; exit}')
MY_IP=152.238.224.58
# Skip following command in WildFly 11
cp $WILDFLY_HOME/docs/examples/configs/standalone-load-balancer.xml \
$WILDFLY_HOME/standalone/configuration/
# run the load balancer profile
$WILDFLY_HOME/bin/standalone.sh -b $MY_IP -bprivate $MY_IP -c standalone-load-balancer.xml
This script uses the public network for communication between the worker nodes and the load balancer. If you want to use a private network (highly recommended), then set the correct IP address of the balancer on the private interface (-bprivate).
Worker nodes
Run the server with the HA (or Full HA) profile, which has the modcluster component included. If UDP multicast works in your environment, the workers should work out of the box without any change. If that's not the case, then configure the IP address of the load balancer statically.
WILDFLY_HOME=/path/to/wildfly
MY_IP=$(ip route get 8.8.8.8 | awk '{print $NF; exit}')
# Configure static load balancer IP address.
# This is necessary when UDP multicast doesn't work in your environment.
LOAD_BALANCER_IP=152.238.224.58
$WILDFLY_HOME/bin/jboss-cli.sh <<EOT
embed-server -c=standalone-ha.xml
/subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=advertise,value=false)
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=proxy1:add(host=$LOAD_BALANCER_IP,port=8090)
/subsystem=modcluster/mod-cluster-config=configuration:list-add(name=proxies,value=proxy1)
EOT
# start the worker node with the HA profile
$WILDFLY_HOME/bin/standalone.sh -c standalone-ha.xml -b $MY_IP -bprivate $MY_IP
Again, to make it safe, you should configure MY_IP as an address from the private network.
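To smoke-test the result, deploy a clustered application (marked <distributable/> in its web.xml) to both workers and request it through the balancer; the context path my-app below is a placeholder:
# Requests to the balancer's public address should now be proxied
# to the worker nodes ("my-app" is a placeholder context path).
curl http://152.238.224.58:8080/my-app/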