Wildfly load-balancing across multiple machines - jboss

I am trying to configure a wildfly server with a load balancer for learning purposes. Here's what I got:
Three VMs, only accessible by their IPs.
One is 152.238.224.58 - my load balancer
Another one is 152.238.224.59 - my first backend server
The last one is 152.238.224.60 - my second backend server
I find the WildFly documentation rather poor, but after watching Stuart Douglas's explanation of how the load balancer works, I currently have my first VM running a cluster of servers. Load balancing works, but everything is on the same VM (the first one). What I'd rather have is the load balancer acting as a proxy for the two backend servers.
I've tried the method described in the WildFly documentation but didn't manage to make it work.
What would I need to do to have the first VM load-balance across the other two VMs? To go even further, how difficult would it be to have the first VM act as a load balancer between VM-2 and VM-3, where VM-2 and VM-3 are clusters (would they then have their own load balancers?)?
Thanks a lot for any pointers.

Since WildFly 10.1 a load-balancer profile has shipped as part of the WildFly installation. Just use it. I'm providing sample steps here (based on my demo scripts for MS Azure).
Load balancer
Use the standalone-load-balancer.xml profile for the load balancer. WildFly 10.1 ships the profile within the examples; WildFly 11 has it as a standard profile in the configuration directory.
WILDFLY_HOME=/path/to/wildfly
# Autodetect this host's outbound IP address instead, if you prefer:
# MY_IP=$(ip route get 8.8.8.8 | awk '{print $NF; exit}')
MY_IP=152.238.224.58
# Skip the following copy on WildFly 11; the profile is already in place
cp $WILDFLY_HOME/docs/examples/configs/standalone-load-balancer.xml \
   $WILDFLY_HOME/standalone/configuration/
# Run the load-balancer profile
$WILDFLY_HOME/bin/standalone.sh -b $MY_IP -bprivate $MY_IP -c standalone-load-balancer.xml
This script uses the public network for communication between the worker nodes and the load balancer. If you want to use a private network instead (highly recommended), set the load balancer's private-network address on the private interface (-bprivate).
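For example, a minimal sketch with the two interfaces bound separately; the 10.0.0.58 private address is a made-up placeholder, not part of the original setup:
# Public HTTP traffic on the public interface, mod_cluster/worker
# communication on the private one
PUBLIC_IP=152.238.224.58
PRIVATE_IP=10.0.0.58   # placeholder private-network address
$WILDFLY_HOME/bin/standalone.sh -b $PUBLIC_IP -bprivate $PRIVATE_IP \
    -c standalone-load-balancer.xml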
Worker nodes
Run the server with the HA (or Full HA) profile, which includes the modcluster subsystem. If UDP multicast works in your environment, the workers should register with the balancer out of the box, without any change. If that's not the case, configure the IP address of the load balancer statically.
WILDFLY_HOME=/path/to/wildfly
MY_IP=$(ip route get 8.8.8.8 | awk '{print $NF; exit}')
# Configure static load balancer IP address.
# This is necessary when UDP multicast doesn't work in your environment.
LOAD_BALANCER_IP=152.238.224.58
$WILDFLY_HOME/bin/jboss-cli.sh <<EOT
embed-server -c=standalone-ha.xml
/subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=advertise,value=false)
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=proxy1:add(host=$LOAD_BALANCER_IP,port=8090)
/subsystem=modcluster/mod-cluster-config=configuration:list-add(name=proxies,value=proxy1)
EOT
# start the worker node with the HA profile
$WILDFLY_HOME/bin/standalone.sh -c standalone-ha.xml -b $MY_IP -bprivate $MY_IP
Again, to keep this traffic safe, you should configure MY_IP as an address from the private network.
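To verify that requests are really balanced across both machines, deploy a distributable demo app to both workers and hit the balancer a few times. The app name cluster-demo.war and the default HTTP port 8080 on the balancer are assumptions for illustration:
# Each request goes through the balancer; without a JSESSIONID cookie
# there is no sticky session, so responses should alternate between
# the two worker VMs
for i in 1 2 3 4; do
  curl -s http://152.238.224.58:8080/cluster-demo/
done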

Related

Log activity monitoring on ELK box within EKS

I have configured an ELK stack server with Filebeat, which monitors logs across several nodes within an EKS cluster on AWS.
I would like to expose the Kibana dashboard so that I can view these logs. As the machine containing the ELK stack has a private IP address (no public IP), how can I expose it to outside access so that I can view it from my desktop? There were recommendations to follow; however, the 1st and the 3rd don't work quite well, and the 2nd is not preferred:
1. set up ingress onto the ELK machine
2. set up a DNS entry to have Route 53 point to the IP address of the ELK machine
3. port forwarding
I would appreciate some insight into a potential solution.
Simply setting up SSH port forwarding on the host machine worked for me.
ssh -v -N -L <local port>:<elk_host>:<remote port> <jump box>
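For example, to reach Kibana on its default port 5601 (the ELK host address, jump box name, and user below are placeholders):
# -N: no remote command, just the tunnel; -v: verbose output for debugging
# Forwards localhost:5601 to the ELK machine's Kibana through the jump box
ssh -v -N -L 5601:10.0.12.34:5601 ec2-user@jumpbox.example.com
# Then open http://localhost:5601 in a browser on your desktop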

I am not able to expose a service in a Kubernetes cluster to the internet

I have created a simple hello world service in my kubernetes cluster. I am not using any cloud provider and have created it in a simple Ubuntu 16.04 server from scratch.
I am able to access the service inside the cluster but now when I want to expose it to the internet, it does not work.
Here is the yml file - deployment.yml
And this is the result of the command - kubectl get all:
Now when I am trying to access the external IP with the port in my browser, i.e., 172.31.8.110:8080, it does not work.
NOTE: I also tried the NodePort service type, but then it does not provide any external IP to me. The state remains pending in the "EXTERNAL-IP" column when I do "kubectl get services".
How can I resolve this?
I believe you might have a mix of networking problems tied together.
First of all, 172.31.8.110 belongs to a private network and is not routable via the Internet. So make sure that the location you are trying to browse from can reach the destination (i.e. is on the same private network).
As a quick test, you can make an SSH connection to your master node and then check whether you can open the page:
curl 172.31.8.110:8080
In order to expose it to the Internet, you need to use a public IP for your master node, not the internal one. Then update your Service's externalIPs accordingly.
Also make sure that your firewall allows network connections from the public Internet to port 8080 on the master node.
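A minimal sketch of the externalIPs approach; the service name, pod label, and the 203.0.113.10 public address are assumptions, since the original deployment.yml is not shown:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: hello-world          # assumed service name
spec:
  selector:
    app: hello-world         # assumed pod label
  ports:
    - port: 8080
      targetPort: 8080
  externalIPs:
    - 203.0.113.10           # the node's PUBLIC address, not 172.31.8.110
EOF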
In any case, I suggest that you use this configuration for testing purposes only, as it is generally a bad idea to use the master node for service exposure: it puts extra network load on the master and widens the attack surface. Use something like an Ingress controller (Nginx or another) plus an Ingress resource instead.
One option is also to do SSH local port forwarding.
ssh -L <local-port>:<private-ip-on-your-server>:<remote-port> <ip-of-your-server>
So in your case for example:
ssh -L 8888:172.31.8.110:8080 <ip-of-your-ubuntu-server>
Then you can simply go to your browser and access the site at http://localhost:8888 .
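If you would rather have a browser-wide SOCKS proxy than a single forwarded port, dynamic forwarding is the variant that matches that setup:
# -D opens a local SOCKS5 proxy; point the browser's SOCKS settings at
# localhost:8888 and all of its traffic is tunnelled through the server
ssh -D 8888 <ip-of-your-ubuntu-server>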

Internal and external reverse proxy network using Kubernetes and Traefik, how?

I am trying to learn Kubernetes and Rancher. Here is what I want to accomplish:
I have a few Docker containers which I want to serve only from my internal network on x.mydomain.com
I have the same as above, but those containers will be accessible from the Internet on x.mydomain.com
What I have at the moment is the following:
1. Rancher server
2. RancherOS to be used for the cluster and as one node
3. I have made a cluster, added the node from step 2, and disabled the nginx ingress controller
4. Installed the Traefik app
5. I have forwarded ports 80 and 443 to my node
6. Added a few containers
7. Added ingress rules
So at the moment it works with the external network: I can type app1.mydomain.com from the Internet and everything works as it should.
Now my problem is: how can I add the internal network?
Do I create another cluster? Another node on the same host? Should I install two Traefik instances and then use an ingress class for the internal stuff?
My idea was to add another IP to the same interface on RancherOS and then add another node on the same host with that other IP, but I can't get it to work. Rancher sees both nodes with the same name and ignores the --address I give when creating the node. Even if this worked, it would also require an internal DNS server so clients know which domains are served internally, but I haven't set that up yet, since I can't figure out how to handle the two IPs on the host and use them as two different nodes. I am unsure what is required; maybe I'm going down the wrong route.
I would appreciate if somebody had some ideas.
Update :
I thought I had made clear above what I want. There is no YAML at the moment, since I don't know how to write it yet. In my head it's simple. Let me try to boil it down to an example:
I want 2 Docker containers with web servers to be accessible from the Internet on web1.mydomain.com and web2.mydomain.com, and at the same time 2 Docker containers with web servers that I can access only from the internal network on web3.mydomain.com and web4.mydomain.com.
Additional info:
- I only have one host that will be hosting the services.
- I only have one public IPv4 address.
- I can add an additional IP alias to the one host I have.
- I can, if needed, configure an internal DNS server.
/donnib
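A rough sketch of the "two Traefik instances with ingress classes" idea raised in the question; the class names and service names are assumptions, not a tested setup. Each Traefik 1.x instance would be started with its own --kubernetes.ingressclass value (and only the external instance's ports forwarded from the Internet), so each picks up only the Ingresses whose annotation matches its class:
kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web1-external
  annotations:
    kubernetes.io/ingress.class: traefik-external   # assumed class name
spec:
  rules:
    - host: web1.mydomain.com
      http:
        paths:
          - backend:
              serviceName: web1       # assumed service name
              servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web3-internal
  annotations:
    kubernetes.io/ingress.class: traefik-internal   # assumed class name
spec:
  rules:
    - host: web3.mydomain.com
      http:
        paths:
          - backend:
              serviceName: web3       # assumed service name
              servicePort: 80
EOF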

How to get Kubernetes Pods to use a transparent SOCKS5 proxy for certain connections?

I have a Kubernetes cluster (Kubernetes 1.13, Weave Net CNI) that has no direct access to an internal company network. There is an authentication-free SOCKS5 proxy that can (only) be reached from the cluster, and which resolves and connects to resources in the internal network.
Consider some 3rd-party Docker images used in Pods that don't have any explicit proxy support and just want a resolvable DNS name and a target port to connect to a TCP-based service (which might be HTTP(S), but doesn't have to be).
What kind of setup would you propose to bind the Pods and Company Network Services together?
The only two things that come to my mind are:
1) Run a SOCKS5 Docker image as a sidecar: https://hub.docker.com/r/serjs/go-socks5-proxy/
2) Use a transparent proxy redirector on the nodes - https://github.com/darkk/redsocks
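To make option 2 concrete, here is a rough sketch of the redsocks pattern on a node; the proxy address 10.20.0.5:1080 and the internal range 10.0.0.0/8 are placeholders for your environment, and DNS for the internal names still needs separate handling:
# redsocks.conf: accept redirected TCP on port 12345 and relay it
# through the company SOCKS5 proxy
cat > /etc/redsocks.conf <<EOT
base { log_debug = off; log_info = on; daemon = on; redirector = iptables; }
redsocks {
    local_ip = 127.0.0.1;
    local_port = 12345;
    ip = 10.20.0.5;      # SOCKS5 proxy (placeholder)
    port = 1080;
    type = socks5;
}
EOT
redsocks -c /etc/redsocks.conf

# Redirect only traffic destined for the internal network into redsocks
iptables -t nat -N REDSOCKS
iptables -t nat -A REDSOCKS -p tcp -j REDIRECT --to-ports 12345
iptables -t nat -A OUTPUT -p tcp -d 10.0.0.0/8 -j REDSOCKS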

Connecting client to JBoss cluster

I am new to JBoss. Basically I have managed to cluster 2 nodes with:
Node 1: run.bat -c all -g DefaultPartition -u 230.0.0.4 -b 10.67.51.28 -Djboss.messaging.ServerPeerID=1
Node 2: run.bat -c all -g DefaultPartition -u 230.0.0.4 -b 10.67.50.21 -Djboss.messaging.ServerPeerID=2
I know that if I configure Apache load balancing (mod_jk) to sit in front of the cluster, the client simply punches in the IP of the Apache server, and Apache will redirect the traffic to the nodes.
But I do not want an Apache in front of the cluster. So how does my client access the cluster? Do I need to configure something in JBoss, or is it a MUST to have a load balancer for the client to access the cluster?
MANY thanks in advance....
Apache is not strictly needed to perform failover, but you will need something at the infrastructure level to redirect requests to the other server when the first one is down.
To achieve failover with JBoss, the default approach is to use several JBoss nodes (in cluster mode, to replicate session data) with an HTTP-level network infrastructure in front that routes each request to the correct JBoss instance. Several routing strategies can be used, e.g. balancing sessions across the available nodes (the default used by most Java EE systems), or one node taking all the load with the IP switched over automatically when the environment detects that the node is down.
The first strategy is provided by mod_jk and is probably the simpler one, at a reasonable price.
For real high availability you will need a completely redundant infrastructure (routers, switches, etc.) and several reverse proxies (the Apache nodes) behind a hardware HA load balancer.
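As a rough illustration of the mod_jk approach, a workers.properties could look like this; the worker names and the default AJP port 8009 are assumptions, while the node IPs come from the question above:
# workers.properties for Apache/mod_jk: one load-balancer worker in front
# of two JBoss nodes speaking AJP on the default port 8009
cat > workers.properties <<EOT
worker.list=loadbalancer
worker.node1.type=ajp13
worker.node1.host=10.67.51.28
worker.node1.port=8009
worker.node2.type=ajp13
worker.node2.host=10.67.50.21
worker.node2.port=8009
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=true
EOT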
If you only have 2 JBoss nodes, how would requests going to the node that is down be rerouted to the failover node?
If it helps, re-brand the Apache node as a "failover request router"...