JBoss 7.1.1 and the EJB 3.1 Timer Service

I am thinking about porting a Spring Quartz-based application to EJB 3.1 to see if EJB has improved. I am having problems understanding how fail-over works with the Schedule Timer Service. In Quartz, there are database tables which clustered Quartz instances share; if one node in your cluster crashes, jobs will still get executed on the other nodes.
I have been looking at how the Timer Service persists things, and it appears to use the file system of the server the timer was created on. Is this true? I don't see how that can be right, as it would render the Timer Service unusable in a cluster: it would not support failover.
So I must be missing something. Can anyone help me out with this?

The EJB timer service is simply not as advanced as Quartz (with or without Spring).
The EJB spec does not define where timers are persisted. It may happen to be the file system, but it could also be the Windows registry if you happen to be running on Windows, or an LDAP server, or whatever the vendor chooses.
There was an issue about this on the EJB spec JIRA for some time, and it was discussed on the spec mailing list, but then it was brutally dropped and closed because nobody bothered to reply (perhaps because a lot of people were on vacation at the time). It's one of the lamest reasons to close an issue if you ask me, but I guess the spec lead sometimes must resort to such measures.
Anyway, in JBoss AS persistence happens to an embedded relational datasource, which in turn writes to the file system. Via proprietary configuration you can point this datasource to any remote DB. Fail-over would have to come from proprietary JBoss functionality as well. Although EJB forbids lots of things for the sake of potential clustering, there is no explicit clustering support in the spec, and EJB timers specifically are not cluster-aware.
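For reference, declaring a persistent timer in EJB 3.1 is simple; where and how it is persisted is entirely up to the container, which is exactly the problem described above. A minimal sketch (the schedule values are illustrative):

```java
import javax.ejb.Schedule;
import javax.ejb.Singleton;

@Singleton
public class ReportTimerBean {

    // persistent = true (the default) asks the container to keep the timer
    // across restarts; the spec says nothing about WHERE it is stored,
    // nor anything about cluster failover.
    @Schedule(hour = "*", minute = "*/5", persistent = true)
    public void generateReport() {
        // business logic here
    }
}
```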

Not sure if this was available at the time of the question, but you can use the 'cluster-ha-singleton' pattern for this. It allows you to create a singleton timer that is invoked on a single cluster node; if the chosen node fails, a new node is elected to run the singleton (and therefore the timers).
http://www.jboss.org/quickstarts/eap/cluster-ha-singleton/
It mentions EAP, but I am running it fine on AS 7.2.0; the jars are already included in /modules/org/jboss/.
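The timer bean behind that quickstart boils down to something like the sketch below (names are illustrative, and the actual quickstart additionally wraps the bean in a JBoss MSC singleton service so that only the elected node calls initialize()). The timer is deliberately non-persistent because the newly elected node simply recreates it on failover:

```java
import javax.annotation.Resource;
import javax.ejb.ScheduleExpression;
import javax.ejb.Stateless;
import javax.ejb.Timeout;
import javax.ejb.Timer;
import javax.ejb.TimerConfig;
import javax.ejb.TimerService;

@Stateless
public class HASchedulerBean {

    @Resource
    private TimerService timerService;

    // Called by the HA singleton service when this node is elected.
    public void initialize(String info) {
        ScheduleExpression every10s =
                new ScheduleExpression().hour("*").minute("*").second("*/10");
        // Non-persistent: after failover the new node recreates the timer.
        timerService.createCalendarTimer(every10s, new TimerConfig(info, false));
    }

    @Timeout
    public void onTimeout(Timer timer) {
        // runs on exactly one cluster node at a time
    }
}
```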

Related

Expressing that a service requires another

I'm new to k8s, so this question might be kind of weird, please correct me as necessary.
I have an application which requires a redis database. I know that I should configure it to connect to <redis service name>.<namespace> and the cluster DNS will get me to the right place, if it exists.
It feels to me like I want to express the relationship between the application and the database. Like I want to say that the application shouldn't be deployable until the database is there and working, and maybe that it's in an error state if the DB goes away. Is that something you'd normally do, and if so - how? I can think of other instances: like with an SQL database you might need to create the tables your app wants to use at init time.
Is the alternative to try to connect early and exit 1, so that the cluster keeps on retrying? Feels like that would work but it's not very declarative.
Design for resiliency
Modern applications and Kubernetes are (or should be) designed for resiliency. Applications should be designed without a single point of failure and should be resilient to changes in e.g. network topology. Also see the Twelve-Factor App: IV. Backing services.
This means that your Redis should typically be a cluster of e.g. 3 instances, and it means that your app should retry a connection if it fails. This can happen at any time while the app is running, since upgrades of a cluster (or a rolling upgrade of an app) are done by terminating one instance at a time while a new instance is launched in its place. E.g. the instance (of a cluster) that your app is currently connected to might go away, and your app needs to reconnect, perhaps establishing a connection to a different instance in the same cluster.
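As a concrete illustration of both the retry advice above and the question's exit-1 idea, here is a hedged sketch of a startup probe using plain sockets (no Redis client library assumed; the host variable and retry limits are illustrative). It could run first thing in the app, or as the command of an initContainer:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class WaitForRedis {
    public static void main(String[] args) throws InterruptedException {
        // e.g. "<redis service name>.<namespace>" resolved by cluster DNS
        String host = System.getenv().getOrDefault("REDIS_HOST", "redis.default");
        int port = 6379;

        for (int attempt = 1; attempt <= 30; attempt++) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 2_000);
                System.out.println("Redis reachable; starting application");
                return;
            } catch (Exception e) {
                System.err.println("Attempt " + attempt + " failed: " + e.getMessage());
                Thread.sleep(Math.min(1_000L * attempt, 10_000L)); // capped backoff
            }
        }
        System.exit(1); // let Kubernetes restart the pod and keep retrying
    }
}
```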
SQL Databases and schemas
I can think of other instances: like with an SQL database you might need to create the tables your app wants to use at init time.
Yes, this is a different case. On Kubernetes your app is typically deployed with at least 2 replicas, or more (for high-availability reasons). You need to take that into account when managing schema changes for your app. Common tools to manage the schema are Flyway and Liquibase, and they can be run as Jobs: e.g. first launch a Job to create your DB tables, and after that deploy your app. Some weeks later you might want to change some tables, and you launch a new Job for that schema migration.
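Flyway can also be invoked programmatically, so the migration Job's container can be a tiny main class. A hedged sketch (connection settings are illustrative; a real Job would take them from a Secret):

```java
import org.flywaydb.core.Flyway;

public class MigrateJob {
    public static void main(String[] args) {
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://db:5432/app", // hypothetical URL
                            "app",
                            System.getenv("DB_PASSWORD"))
                .load();
        flyway.migrate(); // applies pending SQL migrations, then the Job exits
    }
}
```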
As you've seen, YAML objects cannot express such dependencies. As suggested by #fabian-lopez, your application pod may include an initContainer that waits for dependencies to be available before the main container starts.
Now, if you want a state machine capable of provisioning a database, initializing its schema, maybe importing some records, and only then creating your application: you're looking for an operator. You could use the operator-sdk (https://github.com/operator-framework/operator-sdk), or pretty much anything that integrates with the Kubernetes cluster API.
I think Init Containers are something you could leverage for this use case.
This is up to your application code, not something Kubernetes either helps or hinders with.

Is it possible to use dolphinscheduler without zookeeper?

ZooKeeper plays several roles in the open-source workflow framework DolphinScheduler, such as heartbeat detection among masters and workers, task queuing, event listening, and distributed locking.
[DolphinScheduler framework architecture diagram]
Is it possible to replace it with a database (MySQL)? The main reason is to simplify the project structure.
ZooKeeper in DS is mainly used as:
A task queue, for masters sending tasks to workers
A lock, for communication between hosts (masters and workers)
An event watcher: a master listens for the events of workers being added or removed (see the sketch after this list)
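To make the event-watcher role concrete, here is a hedged sketch using the plain ZooKeeper Java client: a master watching a worker path for joins and departures. The path and session timeout are illustrative, not DolphinScheduler's actual values:

```java
import java.util.List;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class WorkerWatcher implements Watcher {
    private final ZooKeeper zk;

    public WorkerWatcher(String connectString) throws Exception {
        this.zk = new ZooKeeper(connectString, 30_000, this);
    }

    // Workers register ephemeral znodes under /workers; their nodes
    // disappear automatically when their sessions die.
    public void watchWorkers() throws Exception {
        // ZooKeeper watches are one-shot, so re-register on every call.
        List<String> workers = zk.getChildren("/workers", this);
        System.out.println("Live workers: " + workers);
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeChildrenChanged) {
            try {
                watchWorkers(); // a worker joined or left; refresh and re-watch
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
```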
It would be costly to replace ZooKeeper with MySQL.
ZooKeeper mainly assumes the responsibility of the registry and monitors the application status. ZooKeeper is very mature in this area and is a recognized solution in the industry. If MySQL were to take this on, the technical implementation cost would be higher, and it might not achieve the desired effect.
BTW, their team is currently working on SPI support for the registry, so in later versions you may be able to use other components, such as etcd, to achieve similar functionality.
For now, the MasterServer and WorkerServer nodes in the system both use ZooKeeper for cluster management and fault tolerance. In addition, the system performs event monitoring and distributed locking based on ZooKeeper. We also implemented queues based on Redis, but we want DolphinScheduler to rely on as few components as possible, so we eventually removed the Redis implementation.
So for now DolphinScheduler can't work properly without ZooKeeper; maybe that will change in the future.
DolphinScheduler System Architecture:
For more documentation, please refer to the Official Document.

JBOSS AS 7 Load Balancing with Server Failover

I have 2 instances of JBoss servers running on e.g. 127.0.0.1 and 127.0.0.2.
I have implemented JBoss load balancing but am not sure how to achieve server failover. I do not have a web server to monitor the heartbeat, and hence using mod_cluster is out of the question. Is there any way I can achieve failover using only the two available servers?
Any help would be appreciated. Thanks.
JBoss clustering automatically provides JNDI and EJB failover and also HTTP session replication.
If your JBoss AS nodes are in a cluster then the failover should just work.
The documentation refers to an older version of JBoss (5.1), but it has clear descriptions of how JBoss clustering works.
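On the client side, failover boils down to listing both nodes so a lookup can fall back when one dies. A hedged sketch with remote JNDI (the IPs come from the question; whether a comma-separated provider URL is honored depends on the remote-naming version you ship, so treat this as an assumption to verify, and the JNDI name is purely illustrative):

```java
import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;

public class FailoverLookup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.jboss.naming.remote.client.InitialContextFactory");
        // Both cluster nodes; the client tries the next URL when one is down.
        props.put(Context.PROVIDER_URL,
                "remote://127.0.0.1:4447,remote://127.0.0.2:4447");
        Context ctx = new InitialContext(props);
        Object bean = ctx.lookup("myapp/MyBean!com.example.MyBeanRemote");
        System.out.println("Looked up: " + bean);
    }
}
```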
You could spin up another instance to serve as your domain controller, and the two instances you already have would be your hosts. Then you could go through the domain controller, and it will do the work for you. However, I haven't seen instances go down too often; it's usually whole servers that do, and it looks like you are using just one server (I might be wrong) for both instances, so I would consider splitting that up.

Questions Concerning Using Celery with Multiple Load-Balanced Django Application Servers

I'm interested in using Celery for an app I'm working on. It all seems pretty straightforward, but I'm a little confused about what I need to do if I have multiple load-balanced application servers. All of the documentation assumes that the broker will be on the same server as the application. Currently, all of my application servers sit behind an Amazon ELB, and tasks need to be able to come from any one of them.
This is what I assume I need to do:
Run a broker server on a separate instance
Configure each application instance to connect to that broker server
Each application instance will also be a Celery worker (running celeryd)?
My only beef with that is: what happens if my broker instance dies? Can I run 2 broker instances somehow so I'm safe if one goes under?
Any tips or information on what to do in a setup like mine would be greatly appreciated. I'm sure I'm missing something or not understanding something.
For future reference, for those who do prefer to stick with RabbitMQ...
You can create a RabbitMQ cluster from 2 or more instances. Add those instances to your ELB and point your celeryd workers at the ELB. Just make sure you connect the right ports and you should be all set. Don't forget to allow your RabbitMQ machines to talk among themselves to run the cluster. This works very well for me in production.
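Celery itself is configured in Python, but the broker-failover idea is client-agnostic. For readers on the JVM, the same pattern with the RabbitMQ Java client looks like this hedged sketch (host names and credentials are illustrative):

```java
import com.rabbitmq.client.Address;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ClusterConnect {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setUsername("guest");
        factory.setPassword("guest");
        // Either point at the ELB's DNS name, or list the cluster nodes
        // directly; the client connects to the first reachable address.
        Address[] nodes = {
                new Address("rabbit-1.internal", 5672),
                new Address("rabbit-2.internal", 5672)
        };
        try (Connection conn = factory.newConnection(nodes)) {
            System.out.println("Connected to " + conn.getAddress());
        }
    }
}
```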
One exception here: if you need to schedule tasks, you need a celerybeat process. For some reason, I wasn't able to connect celerybeat to the ELB and had to connect it to one of the instances directly. I opened an issue about it, and it is supposed to be resolved (I haven't tested it yet). Keep in mind that celerybeat by itself can only exist once, so that's already a single point of failure.
You are correct on all points.
How to make the broker reliable: set up a clustered RabbitMQ installation, as described here:
http://www.rabbitmq.com/clustering.html
Celery beat also doesn't have to be a single point of failure if you run it on every worker node with single-beat:
https://github.com/ybrs/single-beat

Alternatives to JMS for queuing

We have a REST web service that receives requests from external systems and makes updates to our DB accordingly. I'm looking to implement a caching/queuing solution for the requests that come in, as we've had some DB server challenges lately, and have lost some messages when the DB server went down.
Before I start putting together a simple persistent file-based queue, I want to see if there are any good alternatives to JMS, as its use is restricted in our environment.
Current platforms:
Jboss 4.3
Richfaces 3.3
Spring 3.0.5
RESTEasy
** UPDATES **
Per skaffman's question below, here are my requirements for clustering, transactions, etc.:
Clustering: Our web and app servers are all clustered, so the queue(s) will need to be able to process items from all cluster nodes. However, our commits are essentially atomic, so ordering and synchronization issues are minimal. Thread and cluster safety is not really a factor; separate/independent queues on each cluster node would be sufficient.
Transactions: Again, due to the atomic nature of our data, transactional needs are minimal/not required outside of each individual request.
Security: A moderate concern, but I would anticipate that to be handled by our regular security on the web service. I wouldn't anticipate anything other than the web app itself reading or writing to the queue(s), and then only in instances of high volume or when the DB is unavailable.
Thanks,
Mike
For one project we did use a queue (HornetQ), but it was embedded in the WAR and deployable on Tomcat, because the customer did not want WebLogic or JBoss application servers. If your restrictive policy extends to your application architecture as well, though, such a solution would be forbidden.
For another project we did not use any JMS implementation; we made the processing asynchronous by using a message database and the Service Activator of the Spring Integration framework for consuming the events.
That way, any message publisher just inserts a row in a DB table, and the Service Activator picks up the event and calls any other service (a Spring bean, a web service, etc.).
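A minimal sketch of that wiring with annotation-style Spring Integration (the channel and class names are illustrative, and the polling adapter that reads the message table, e.g. a JdbcPollingChannelAdapter, is only hinted at; an @EnableIntegration configuration is assumed):

```java
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.stereotype.Component;

@Component
public class QueuedRequestHandler {

    // Rows polled from the message table arrive as messages
    // on the "dbMessages" channel.
    @ServiceActivator(inputChannel = "dbMessages")
    public void handle(String payload) {
        // call any downstream service: a Spring bean, a web service, etc.
        System.out.println("Processing queued request: " + payload);
    }
}
```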