I'm a newbie to Drools. Most of the powerful features, such as Drools Fusion and timer-based rules, require stateful sessions. So an obvious issue comes up: if the server hosting the stateful session goes down, is it possible to recover the session through the KIE Execution Server?
For example, I start a rule with a timer (int: 30s), but the server that hosts the ksession goes down after 15 seconds. How can I recover it?
I've read some blog posts on this, like:
http://mswiderski.blogspot.com/2016/04/kie-server-clustering-and-scalability.html
http://planet.jboss.org/post/unified_kie_execution_server_part_1
I've also read a little about VFS clustering in the official documentation, but I'm still unsure: is there an easy way to handle my case?
Thanks,
Related
I'm new to Drools and I'm trying to understand when multiple KieSessions should be used in a Drools project.
I did not manage to find much on this topic in the documentation other than:
"You could decide to create multiple sessions ... if you need multiple
sessions for scalability reasons."
I'm not quite sure what scalability refers to here. Is it about the number of facts inserted into the KIE session? Or about the number of rules? Or is it simply about running the same project for different clients by assigning each client its own KieSession?
trying to understand when multiple kieSessions should be used in a drools project
Stateful sessions require a separate session per client for repeated requests (stateful means the session keeps its data between calls); stateless sessions do not.
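To make that concrete, here's a minimal sketch against the public KIE API. The Order fact class is made up for illustration; the session-handling calls are the standard ones:

```scala
import org.kie.api.KieServices

// Hypothetical fact class, just for illustration.
case class Order(client: String, amount: Int)

object SessionDemo extends App {
  val ks = KieServices.Factory.get()
  val container = ks.getKieClasspathContainer // assumes rules are on the classpath

  // Stateful: the session keeps facts between calls, so each client
  // (or conversation) needs its own session, disposed of when finished.
  val stateful = container.newKieSession()
  stateful.insert(Order("client-1", 100))
  stateful.fireAllRules()
  // ...later requests reuse this session and still see Order("client-1", 100)...
  stateful.dispose()

  // Stateless: every execute() is an isolated, one-shot evaluation,
  // so a single shared instance can serve many clients.
  val stateless = container.newStatelessKieSession()
  stateless.execute(Order("client-2", 50))
}
```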
I'm writing an event-driven architecture in Scala and I need to manage a database using it.
I was wondering if using JDBC, which only supports synchronous calls, would be a good solution to my problem?
I'd thought of writing an asynchronous wrapper for the JDBC calls, but will it really address my concern about threads being blocked by database calls?
This is a really good question, and actually there's no single good answer to it.
It really depends on your database, its protocol, and the driver implementation. First of all, some databases, e.g. Cassandra, have asynchronous capabilities built in at the protocol level. That should make it easier to work in an event-driven model, right? Not exactly: if you receive gigabytes of data over a slow connection, you may still block at the network level.
Other databases have only a synchronous protocol and can thus block your resources, right? Not exactly: connection pools prevent some of the blocking issues.
So, depending on your application architecture and the data you're accessing, you may need to isolate a data-access layer that wraps the JDBC connections and provides asynchronous capabilities. This layer would scale up and down depending on the availability of open connections (e.g. actors holding the connections, with a supervisor that spawns new DB-connection actors when none are free, creating new threads via a PinnedDispatcher).
In other cases, with specific drivers, you may stick with plain JDBC wrapped in a Future and hope that the driver does its magic for you.
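A rough sketch of the "JDBC wrapped in a Future" approach (the URL and pool size are placeholders; the key point is the dedicated ExecutionContext, so blocking calls don't starve your main pool):

```scala
import java.sql.{Connection, DriverManager}
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

object AsyncJdbc {
  // A dedicated, bounded pool for blocking JDBC work; size it to your DB's
  // connection limit rather than the number of CPU cores.
  private val jdbcEc: ExecutionContext =
    ExecutionContext.fromExecutor(Executors.newFixedThreadPool(8))

  def withConnection[T](url: String)(work: Connection => T): Future[T] =
    Future {
      val conn = DriverManager.getConnection(url) // still blocks, but on jdbcEc
      try work(conn) finally conn.close()
    }(jdbcEc)
}

// Usage: the caller gets a Future back and is never blocked itself.
// AsyncJdbc.withConnection("jdbc:h2:mem:demo") { conn =>
//   val rs = conn.createStatement().executeQuery("SELECT 1")
//   rs.next(); rs.getInt(1)
// }
```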
If you are building a large-scale application, you may even want to separate the persistence logic completely behind, say, RabbitMQ, and use RPC to access the database.
As far as I know, JDBC drivers are synchronous. Maybe you can design your system so that your "main" actors asynchronously dispatch requests to background "JDBC" actors that deal with the JDBC driver?
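A minimal Akka (classic) sketch of that idea. The message types, worker name, and the "jdbc-dispatcher" config entry are assumptions; the point is that each worker actor owns one blocking connection:

```scala
import akka.actor.{Actor, ActorSystem, Props}
import java.sql.{Connection, DriverManager}

// Hypothetical message protocol for this sketch.
final case class Query(sql: String)
final case class Rows(values: Vector[String])

// Each worker owns one blocking JDBC connection; requests are serialized
// through its mailbox, so the connection is never shared across threads.
class JdbcWorker(url: String) extends Actor {
  private val conn: Connection = DriverManager.getConnection(url)

  def receive: Receive = {
    case Query(sql) =>
      val rs = conn.createStatement().executeQuery(sql)
      val buf = Vector.newBuilder[String]
      while (rs.next()) buf += rs.getString(1)
      sender() ! Rows(buf.result())
  }

  override def postStop(): Unit = conn.close()
}

object Main extends App {
  val system = ActorSystem("db")
  // "jdbc-dispatcher" would be a dedicated dispatcher defined in
  // application.conf, so blocking workers don't stall the default one.
  val worker = system.actorOf(
    Props(new JdbcWorker("jdbc:h2:mem:demo")).withDispatcher("jdbc-dispatcher"),
    "jdbc-worker")
  worker ! Query("SELECT 1")
}
```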
I'm also trying to implement a fully async architecture, and DB calls are my bottleneck. I found that a good idea is to use C3P0, a connection-pooling JDBC library. Here is a usage example in the Scala Slick framework: link. Connection pooling is a solution in which you keep ready-to-use connections to your DB. It's better than spawning a new connection for each request and dropping it after completion. Here is the full website of the C3P0 project: link.
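For reference, wiring up a C3P0 pool directly looks roughly like this (the driver class and pool sizes are illustrative):

```scala
import com.mchange.v2.c3p0.ComboPooledDataSource

object PooledJdbcDemo extends App {
  val ds = new ComboPooledDataSource()
  ds.setDriverClass("org.h2.Driver") // illustrative driver
  ds.setJdbcUrl("jdbc:h2:mem:demo")
  ds.setMinPoolSize(5)               // connections kept warm
  ds.setMaxPoolSize(20)              // upper bound under load

  // getConnection() borrows from the pool instead of opening a new socket;
  // close() returns the connection to the pool rather than destroying it.
  val conn = ds.getConnection()
  try {
    val rs = conn.createStatement().executeQuery("SELECT 1")
    rs.next()
  } finally conn.close()
}
```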
I understand that a REST web service is stateless, and we are expecting pretty high traffic. Is it a good idea to set the session timeout (we are using Tomcat) really low, like one minute? Pros and cons?
If you are expecting high traffic, session management will add overhead to your application, and with a one-minute timeout your server will spend time invalidating lots of sessions.
If your application is indeed stateless, then don't use sessions. You can't fully disable them either, but as long as you never call getSession() you should be fine.
If you absolutely want to be sure no code is creating sessions, you could have a look at Tomcat's session manager component and maybe create your own implementation that tells you what's happening when you subject your server to stress tests.
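A lighter-weight alternative to a custom session manager is a plain HttpSessionListener that flags any session creation during a stress test. A sketch, assuming the standard servlet API is on the classpath:

```scala
import javax.servlet.annotation.WebListener
import javax.servlet.http.{HttpSessionEvent, HttpSessionListener}

// In a supposedly stateless app this should never fire; if it does,
// some code path is calling request.getSession().
@WebListener
class SessionAlarm extends HttpSessionListener {
  override def sessionCreated(se: HttpSessionEvent): Unit =
    System.err.println(s"Unexpected session created: ${se.getSession.getId}")

  override def sessionDestroyed(se: HttpSessionEvent): Unit = ()
}
```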
How do you reload an application's configuration? Or, what are good strategies for managing dynamic application configuration?
For example, let's say I had log levels and I wanted to change them at runtime. Also, let's assume this is one of many such options. Does it make sense to have a "configuration server" that holds configuration state for other parts of the application to query? Do people do that or did I just make it up?
I believe it's reasonable to keep all your configuration data in a repository (Subversion, Mercurial, etc.) and have applications download it every time they start or attempt to reload their configuration options. This is a centralized approach (though you could run several configuration servers to avoid a single point of failure), and it:
- lets you keep track of changes, so that you know who made a change and when (nobody wants to be held responsible for an improper configuration);
- enables you to use the same configuration for all applications throughout your network;
- makes changes easy: you can simply modify the configuration and notify the applications concerned using a gen_server:abcast call or other means.
proplists(3) are useful when reading configuration.
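The same "reload and query" pattern works outside Erlang too. Here is a rough polling variant sketched in Scala: a holder that periodically re-reads a properties file so components can query current values such as a log level (the file path, key, and interval are placeholders):

```scala
import java.io.FileInputStream
import java.util.Properties
import java.util.concurrent.atomic.AtomicReference
import java.util.concurrent.{Executors, TimeUnit}

class ReloadableConfig(path: String, intervalSeconds: Long) {
  private val current = new AtomicReference(load())

  private def load(): Properties = {
    val props = new Properties()
    val in = new FileInputStream(path)
    try props.load(in) finally in.close()
    props
  }

  // Re-read the file periodically; readers always see a consistent snapshot.
  Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(
    () => current.set(load()), intervalSeconds, intervalSeconds, TimeUnit.SECONDS)

  def get(key: String, default: String): String =
    current.get().getProperty(key, default)
}

// e.g. a logging component re-checks the level on each call:
// val cfg = new ReloadableConfig("/etc/myapp/app.properties", 30)
// if (cfg.get("log.level", "INFO") == "DEBUG") { /* emit debug output */ }
```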
If my understanding is correct, the problem is the following:
You want to create a distributed, scalable system, and of course Erlang is the first choice that comes to mind, since it was designed for such purposes.
You will have several nodes that will be running local applications and also distributed applications as well.
Here the simplest hierarchy is to have a hot-standby backup for every major piece of functionality.
This can be achieved by implementing a distributed application controller.
The simplest example is to have a server start on one node while a slave server is started simultaneously on a mate node.
Distributed application controllers have many advantages.
An easy example is handling node_up messages differently, by introducing new messages that indicate that a node is not only ready as an Erlang VM but that all vital applications on it are running. This way the mate node can be sure that the stand-by node is ready and can start syncing.
Please elaborate or comment if I misunderstood something.
Good luck!
From what I understand, in order to achieve MSMQ load-balancing, one must use a technology such as NLB (Network Load Balancing).
And in order to achieve MSMQ high availability, one must cluster the related BizTalk host (and hence the underlying servers have to be in a cluster themselves).
Yet, according to Microsoft documentation, NLB and Failover Clustering are not compatible. See this link for reference: http://support.microsoft.com/kb/235305
Can anyone PLEASE explain to me how MSMQ load-balancing and high-availability can be achieved?
thank you in advance,
M
I've edited my original answer because on reflection, I think I was talking nonsense.
I don't believe that it is possible to achieve both load balancing and high availability in a BizTalk transactional scenario. Have a look at the section "Migration considerations for moving from MSMQ/T to MSMQ adapter in BizTalk 2006" on the following site: http://blogs.msdn.com/eldarm/
To summarise that post, there are a couple of scenarios:
High Availability (Non-transactional)
You simply have MSMQ on more than one BizTalk server behind NLB
High Availability (Transactional)
For this you need to have a clustered MSMQ host, which means that you can't do any sort of load balancing upon a single queue.
One possible halfway solution is to create two MSMQ adapters on different clustered hosts, each handling different queues. That doesn't sound too nice to me, though.
A key point is understanding the reasons why you would want transactional, clustered behaviour - you need this for ordered delivery and to ensure no duplicates.
In general I wouldn't go to the trouble of load balancing MSMQ - BizTalk itself is load balanced once messages have reached the MessageBox database. While it is true that you will see asymmetric load due to the queue processing happening on one machine, in the overall context of your BizTalk environment this should not be significant.
Again, it is worth remembering that you are clustering MSMQ for reasons beyond simple high availability:
MSMQ adapter receive handler - MSMQ does not support remote transactional reads; only local transactional reads are supported. The MSMQ adapter receive handler must run in a host instance that is local to the clustered MSMQ service in order to complete local transactional reads with the MSMQ adapter.
That was from the following MSDN page.
I hope this edited answer helps. I don't think it was what you were after; maybe I'm wrong and you'll find a workable solution for NLB and transactional MSMQ, but the more I think about it, the more it seems that the two scenarios are not compatible.
A final thought is that you could try posting a similar question on Server Fault. You get a few BizTalk devs on Stack Overflow, including at least two MVPs, but at least where I work this is the sort of question I'd pass on to my networking team.