SIP Servlets clustering on WildFly

I have started using Mobicents (aka Restcomm) SIP Servlets with WildFly 10, but I could not find clear answers in the documentation or anywhere else for the questions I have below.
1) Do SIP Servlets support the standalone-ha model, for example in a SIP dialog failover scenario? For an established call, if node one fails, will node two accept subsequent INVITEs, or will it return a 404-like response?
2) Is it required to use the Mobicents (aka Restcomm) load balancer even if there is already a SIP load balancer in front of the servers?
3) Does the SIP application configuration, code, etc. change between standalone and standalone-ha mode? Or is that handled by WildFly, ActiveMQ, and Infinispan?
Thanks

1) Restcomm SIP Servlets on WildFly 10 doesn't support replication yet. Only the supported Restcomm SIP Servlets on WildFly 7 product from TeleStax supports replication; WildFly 10 is expected to be supported later this year.
2) No, but you need to make sure your load balancer supports SIP session affinity, so that messages from a given dialog are always routed to the same node.
3) No changes should be needed in the application. Just be conscious of what you replicate, as it adds overhead in terms of network traffic and memory usage.
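For illustration, here is a minimal sketch of what such an application code might look like, assuming the standard SIP Servlets API; the servlet name and the attribute key are made up. Nothing in it is cluster-specific, which is the point: the main discipline is keeping whatever you store in the SipSession small and Serializable, since that is what would end up being replicated.

```java
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.sip.SipServlet;
import javax.servlet.sip.SipServletRequest;
import javax.servlet.sip.SipSession;

// Hypothetical example servlet: nothing here is cluster-specific,
// the container decides whether (and how) the SipSession is replicated.
public class CallServlet extends SipServlet {

    @Override
    protected void doInvite(SipServletRequest req) throws ServletException, IOException {
        SipSession session = req.getSession();
        // Keep replicated state small and Serializable (a String here),
        // since every attribute adds network traffic and memory on other nodes.
        session.setAttribute("caller", req.getFrom().getURI().toString());
        req.createResponse(200).send();
    }

    @Override
    protected void doBye(SipServletRequest req) throws ServletException, IOException {
        req.createResponse(200).send();
        req.getSession().invalidate();
    }
}
```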

Related

WebSockets and REST API in the same Tomcat-based application

I have read up on WebSockets, which provide full-duplex connections over TCP and can replace long polling for pushing live updates from server to client. I now have a Tomcat-based application that serves multiple REST web service responses, and I want a couple of the APIs to be implemented using WebSockets, say to render a dashboard with the latest data while multiple users work on it concurrently. Is that possible? My concern is that even if the connection is upgraded from HTTP, wouldn't a WebSocket require a separate port from the default Tomcat port 8080? In that case, should I host the WebSocket endpoints separately from the Tomcat application already running? Please do correct me if any of the above is wrong.
A couple of months ago, I wrote a small Spring Boot webapp with embedded Tomcat that provides both REST endpoints and WebSocket support via the same port. So yes, that works. If you want to sneak a peek: https://github.com/tommybrettschneider/pinterest-boot
Besides that, this post should also clarify things:
Shall I use WebSocket on ports other than 80?
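To make the same-port point concrete, here is a minimal sketch of a JSR-356 WebSocket endpoint that can live in the same WAR as your existing REST resources; the /dashboard path and the class name are placeholders, not anything from the question.

```java
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Deployed inside the same WAR as the REST resources; Tomcat scans for
// @ServerEndpoint and serves it over the existing HTTP connector,
// upgrading individual connections to the WebSocket protocol on demand.
@ServerEndpoint("/dashboard")
public class DashboardEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        // One Session per connected client; keep a registry here if you
        // want to push dashboard updates to everyone later.
    }

    @OnMessage
    public void onMessage(String message, Session session) {
        // Echo back for the sketch; a real endpoint would broadcast
        // the latest dashboard data instead.
        session.getAsyncRemote().sendText("ack: " + message);
    }
}
```

A browser client would then connect to ws://host:8080/yourapp/dashboard, i.e. the same host and port the REST calls already use, just with the ws:// scheme instead of http://.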

CoAP and MQTT support in Wildfly 8.0.0.Final

We have an enterprise solution deployed on a WildFly 8.0.0 server, and we intend to also support CoAP and MQTT as communication protocols. We explored extensively but could not find even an oblique reference to our problem case. Is it possible to add CoAP and MQTT support without destabilizing the WildFly setup?
I think it is theoretically possible to use Californium (https://www.eclipse.org/californium/) within an EE server for CoAP.
The main issue here is that Californium listens on a UDP port (and sends datagrams too).
So if you want to stay within the EE specification, you'll have to implement a JCA adapter for that.
If you want things to just work, you can run/manage it from a JMX bean.
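To sketch that JMX-bean route (skipping JCA entirely): the class names, the object name, and the "hello" resource below are invented for the example, and it assumes the Californium core library is on the classpath.

```java
// CoapListenerMBean.java -- standard MBean interface (public, in its own file)
public interface CoapListenerMBean {
    void start();
    void stop();
}
```

```java
// CoapListener.java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

import org.eclipse.californium.core.CoapResource;
import org.eclipse.californium.core.CoapServer;
import org.eclipse.californium.core.server.resources.CoapExchange;

public class CoapListener implements CoapListenerMBean {

    // Californium opens its own UDP endpoint (default CoAP port 5683),
    // completely outside WildFly's Undertow connectors.
    private final CoapServer server = new CoapServer();

    public CoapListener() {
        server.add(new CoapResource("hello") {
            @Override
            public void handleGET(CoapExchange exchange) {
                exchange.respond("hello from WildFly");
            }
        });
    }

    @Override
    public void start() { server.start(); }

    @Override
    public void stop() { server.stop(); }

    // Example registration, e.g. from a @Singleton @Startup bean's @PostConstruct,
    // so the listener can be started/stopped from JConsole or any JMX client.
    public static void register() throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        mbs.registerMBean(new CoapListener(), new ObjectName("coap:type=CoapListener"));
    }
}
```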
WildFly, being a web server, doesn't necessarily need to support CoAP or MQTT, because those are not standard HTTP-based communication protocols but protocols designed for M2M (machine-to-machine) communication.
As of WildFly 8.0.0.Final, it only supports HTTP (Servlet, JAX-RS, JAX-WS), WebSockets, and HTTP-upgraded Remoting (EJB invocation, remote JNDI).

Multiple load balancers with Tomcat: is it viable?

Question:
Could HAProxy, mod_cluster, and Tomcat be used together?
Either mod_cluster+Tomcat or HAProxy+Tomcat would work, but not HAProxy+mod_cluster+Tomcat unless we are setting up multiple load balancers, correct?
You can chain a mod_cluster Apache HTTP Server balancer behind HAProxy, in both TCP and HTTP mode, in front of Tomcat. You could also use the mod_cluster Apache HTTP Server balancer and HAProxy side by side, with both sending requests to your Tomcats. The latter makes much less sense, though.
If you tell me what you would like to achieve and in which environment, I could help you architect the right solution. For instance: Azure, multiple datacentres, VMs; clients are plain HTTP embedded devices or Docker containers on Tutum/Docker Cloud; clients use HTTPS and certificates for authentication; you can / cannot use AJP internally between mod_cluster and Tomcat, etc. Help me help you :-) -K-

How to do JBoss/BlazeDS clustering and channel failover

I'm stuck with JBoss and BlazeDS clustering.
What I have now is:
2 JBoss instances, running in all mode
One load balancer with Apache and mod_jk, as suggested by the JBoss docs
A Spring/Flex integration app
A Flex application that should not throw errors when one of my JBoss instances goes down
I find Adobe's documentation really lacking, and being new to clustering, JGroups, and balancing, I cannot figure out how to deploy my app in a clustered environment.
Actually this solution is working fine for remote calls: if one of the JBoss instances goes down, the RPC gets routed to the other instance. What is not working are push messages, because if a client is connected to JBossA and JBossA goes down, the client displays an error message stating that it can't reach JBossA, when it should fail over to JBossB without the user noticing anything.
From what I understood, if configured correctly BlazeDS should tell the Flex client about failover servers upon connection. Then, if the Flex client can't connect to the main server, it goes to another. But the hard part for me is getting there.
Can someone point me to the right direction?
Thanks in advance
If you have an Apache web server sitting between the clients and the JBoss servers, with mod_jk handling communication between Apache and JBoss, then that should already meet your failover requirements.
mod_jk will detect if any of the JBoss servers fails, and send requests to the other one. As far as the clients are concerned, they see a single server, which is the apache server. They see nothing of the JBoss servers behind it.
I know nothing about BlazeDS clustering, but I'm guessing it has some form of manual failover mechanism in which it tells clients about a list of server addresses, and the clients pick one that works. This should only be necessary if you don't have a mod_jk middleman, so hopefully you can just ignore the BlazeDS clustering.
Things can, of course, get a lot more complicated, such as when you need the JBoss servers to communicate amongst themselves (e.g. session replication, clustered JMS, distributed caching, etc.), but if you don't need any of that, then you can safely ignore it.

Highly available standalone java server built using J2SE

What is the best way to make a standalone Java server built using the J2SE Socket API highly available? Using an HTTP server would have been a good choice, especially for the built-in features, e.g. security, clustering, transactions, etc., but the server must be capable of accepting TCP/IP socket connections from Java and non-Java clients (mainly legacy). Tomcat does not accept non-HTTP TCP/IP requests, and moreover this post points out that using a servlet to implement a socket connection is not good practice. What would be a good approach?
After exploring online, this is what I have come up with. A standalone Java application can be made highly available by using a combination of the following:
2 VMs deployed with HAProxy and keepalived to form the highly available load-balancing layer.
Keepalived will keep the load balancers in active-passive mode, and HAProxy will forward the requests to a cluster of backend socket-based Java server apps.
At least 2 VMs deployed with the custom socket-based Java server apps. The HAProxy servers will distribute the requests over these 2 VMs.
At least 2 Terracotta servers to share state between the Java server apps. Terracotta will provide the memory sharing and help the custom Java servers scale.
Use MySQL NDB Cluster for the database.
Any suggestions?
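For context, here is a minimal sketch of the kind of plain-J2SE socket server that would sit behind the HAProxy layer described above. The port, thread-pool size, and the line-based "OK" protocol are placeholders; the key idea is that each instance keeps no state a client depends on, so any node behind the balancer can answer any connection.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Minimal TCP server: HAProxy (in TCP mode) load-balances raw connections
// across several copies of this process running on different VMs.
public class LegacySocketServer {

    public static void main(String[] args) throws Exception {
        int port = 9000; // placeholder; must match the HAProxy backend definition
        ExecutorService pool = Executors.newFixedThreadPool(50);

        try (ServerSocket server = new ServerSocket(port)) {
            while (true) {
                Socket client = server.accept();
                pool.execute(() -> handle(client));
            }
        }
    }

    private static void handle(Socket client) {
        // Keep per-connection state local; anything that must survive a node
        // failure belongs in the shared layer (Terracotta / MySQL NDB).
        try (Socket c = client;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(c.getInputStream(), StandardCharsets.UTF_8));
             PrintWriter out = new PrintWriter(
                     new OutputStreamWriter(c.getOutputStream(), StandardCharsets.UTF_8), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println("OK " + line); // placeholder line-based protocol
            }
        } catch (Exception e) {
            // log and drop the connection; the load balancer routes retries elsewhere
        }
    }
}
```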