Akka remote routees in scalable cluster - scala

I have an application that I want to scale, so in one instance (the master) I create a router (created periodically, depending on requests):
val executors = context.actorOf(Props(classOf[ExecutorWorker], nq).withRouter(
  ClusterRouterConfig(ConsistentHashingRouter(), ClusterRouterSettings(
    maxInstancesPerNode = 10,
    allowLocalRoutees = true, useRole = Some("notifier")))),
  name = "router")
If I now register a new instance (another server) in the cluster with role "notifier", would new routee actors also be created in this new instance's heap?

Yes, but you might have to define totalInstances = 1000 in the ClusterRouterSettings.
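A sketch of the router creation from the question with that cap added (nq and ExecutorWorker as above; ClusterRouterSettings signature as in Akka 2.2):
val executors = context.actorOf(Props(classOf[ExecutorWorker], nq).withRouter(
  ClusterRouterConfig(ConsistentHashingRouter(), ClusterRouterSettings(
    totalInstances = 1000, // cluster-wide cap; new "notifier" nodes receive routees until it is reached
    maxInstancesPerNode = 10,
    allowLocalRoutees = true, useRole = Some("notifier")))),
  name = "router")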

Related

Hazelcast AtomicLong data loss when multiple members leave

Hazelcast fails when multiple members disconnect from the cluster.
My scenario is basic and my configuration has a backup count of 3 (it does not work). I have 4 members in a cluster and I use the AtomicLong API to save my key->value pairs. When all members are alive, everything is perfect. However, some data loss occurs when I kill 2 members at the same time (without waiting in between). My member count is always 4. Is there any way to avoid this kind of data loss?
Config config = new Config();
NetworkConfig network = config.getNetworkConfig();
network.setPort(DistributedCacheData.getInstance().getPort());
config.getCacheConfig("default").setBackupCount(3);
JoinConfig join = network.getJoin();
join.getMulticastConfig().setEnabled(false);
join.getTcpIpConfig().setEnabled(true);
config.setNetworkConfig(network);
config.setInstanceName("member-name-here");
Thanks.
IAtomicLong has a hard-coded single sync backup; you cannot configure it to have more than 1 backup. What you are doing is configuring the Cache with 3 backups.
Below is an example demonstrating multiple member disconnects for IMap, where the backup count is configurable:
Config config = new Config();
config.getMapConfig("myMap").setBackupCount(3);
HazelcastInstance[] instances = {
        Hazelcast.newHazelcastInstance(config),
        Hazelcast.newHazelcastInstance(config),
        Hazelcast.newHazelcastInstance(config),
        Hazelcast.newHazelcastInstance(config)
};
IMap<Integer, Integer> myMap = instances[0].getMap("myMap");
for (int i = 0; i < 1000; i++) {
    myMap.set(i, i);
}
System.out.println(myMap.size()); // 1000
instances[1].getLifecycleService().terminate();
instances[2].getLifecycleService().terminate();
System.out.println(myMap.size()); // still 1000: 3 backups survive losing 2 of 4 members
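By contrast, here is a minimal sketch of the same kill-two-members scenario with IAtomicLong (Scala, Hazelcast 3.x API assumed) that shows why the question's setup loses data: each named IAtomicLong has exactly one sync backup, so terminating two members at once can destroy both the owner and the backup copy:
import com.hazelcast.config.Config
import com.hazelcast.core.Hazelcast

val config = new Config()
val instances = (1 to 4).map(_ => Hazelcast.newHazelcastInstance(config))
// one IAtomicLong per key, as in the question
val counters = (0 until 1000).map(i => instances(0).getAtomicLong(s"key-$i"))
counters.foreach(_.set(42L))
instances(1).getLifecycleService.terminate()
instances(2).getLifecycleService.terminate()
// counters whose owner and single backup both lived on the killed members
// are re-created with the default value 0, i.e. data loss
println(counters.count(_.get() == 42L))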

Using JMX to monitor a Kafka topic

I am using JMX to monitor a Kafka topic.
val url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://broker1:9393/jmxrmi")
val jmxc = JMXConnectorFactory.connect(url, null)
val mbsc = jmxc.getMBeanServerConnection()
val messageCountObj = new ObjectName("kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=mytopic")
val messagesInPerSec = mbsc.getAttribute(messageCountObj, "MeanRate")
Using this code I can get the MeanRate of "mytopic" on broker1, but I have 10 brokers. How can I get "mytopic"'s MeanRate from all of my brokers?
I tried "service:jmx:rmi:///jndi/rmi://broker1:9393,broker2:9393,broker3:9393/jmxrmi"
and got an error :(
It would be nice if it were that simple ;)
There's no way to do this as you outlined it. You will need to make a separate connection to each broker.
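A minimal sketch of that per-broker approach (Scala; the host names broker1..broker10 are assumed), averaging the per-broker MeanRate values:
import javax.management.ObjectName
import javax.management.remote.{JMXConnectorFactory, JMXServiceURL}

val name = new ObjectName("kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=mytopic")
val rates = (1 to 10).map { i =>
  // one separate JMX connection per broker
  val jmxc = JMXConnectorFactory.connect(
    new JMXServiceURL(s"service:jmx:rmi:///jndi/rmi://broker$i:9393/jmxrmi"), null)
  try jmxc.getMBeanServerConnection.getAttribute(name, "MeanRate").asInstanceOf[Double]
  finally jmxc.close()
}
println(s"mean of per-broker MeanRates: ${rates.sum / rates.size}")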
One possible solution would be to use MBeanServer federation, which registers proxies for each of your brokers in one MBeanServer. If you did this on broker1, you could connect to service:jmx:rmi:///jndi/rmi://broker1:9393/jmxrmi and query the stats for all your brokers in one go, but you would need to query 10 different ObjectNames, read the value from each, and then compute the overall mean rate yourself. [Java] Pseudo code:
ObjectName wildcard = new ObjectName("*:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=mytopic");
double totalRate = 0d;
int respondingBrokers = 0;
for (ObjectName on : mbsc.queryNames(wildcard, null)) {
    totalRate += (Double) mbsc.getAttribute(on, "MeanRate"); // read from the matched name, not the original ObjectName
    respondingBrokers++;
}
// Average rate of mean rates: totalRate/respondingBrokers
Note: no exception handling, and I am assuming the rate type is a Double.
You could also create and register a custom MBean that computes the aggregate mean on the federated broker.
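A rough sketch of such an aggregating MBean (Scala; all names are hypothetical, and it assumes the broker proxies are already federated into the local platform MBeanServer):
import java.lang.management.ManagementFactory
import javax.management.{MBeanServer, ObjectName}
import scala.collection.JavaConverters._

// JMX standard-MBean convention: class X must implement an interface named XMBean
trait TopicMeanRateMBean {
  def getAverageMeanRate: Double
}

class TopicMeanRate(mbs: MBeanServer) extends TopicMeanRateMBean {
  private val pattern =
    new ObjectName("*:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=mytopic")

  override def getAverageMeanRate: Double = {
    val rates = mbs.queryNames(pattern, null).asScala.toSeq
      .map(on => mbs.getAttribute(on, "MeanRate").asInstanceOf[Double])
    if (rates.isEmpty) 0d else rates.sum / rates.size
  }
}

// register it once on the federated broker
val server = ManagementFactory.getPlatformMBeanServer
server.registerMBean(new TopicMeanRate(server), new ObjectName("aggregate:name=TopicMeanRate"))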
If you are Maven oriented, you can build the OpenDMK from here.

Routing configuration for an Akka cluster with remote nodes

I have several remote nodes that run on different computers and are connected in a cluster.
There is a logging system on one of the nodes (with role 'logging') that writes the logs to a database.
I chose to use routing to deliver messages to the logger from the other nodes.
I have one node with a main actor and three child actors. Each of them must send logs to the logging node.
My configuration for the router:
akka.actor.deployment {
  /main/loggingRouter = {
    router = adaptive-group
    nr-of-instances = 100
    cluster {
      enabled = on
      routees-path = "/user/loggingEvent"
      use-role = logging
      allow-local-routees = on
    }
  }
  "/main/*/loggingRouter" = {
    router = adaptive-group
    nr-of-instances = 100
    cluster {
      enabled = on
      routees-path = "/user/loggingEvent"
      use-role = logging
      allow-local-routees = on
    }
  }
}
And I create a router in each actor with this code:
val logging = context.actorOf(FromConfig.props(), name = "loggingRouter")
And send
logging ! LogProtocol("msg")
After that, the logger receives messages from only one child actor. I don't know how to debug this, but my guess is that I'm applying the wrong pattern here.
What is the best practice for this task? Thanks.
The actor on the logger node:
system.actorOf(Logging.props(), name = "loggingEvent")
The problem is in the routers having the same name. As I see it, a good pattern is to create one router in the main actor and pass its reference to the children.
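A minimal sketch of that pattern (Main and Child are hypothetical names; LogProtocol as in the question). With a single router under /main, only the first deployment section is needed:
import akka.actor.{Actor, ActorRef, Props}
import akka.routing.FromConfig

object Child {
  def props(logging: ActorRef) = Props(classOf[Child], logging)
}

// one router, created once in the parent and handed to every child,
// so there is a single /main/loggingRouter deployment path
// (assumes the parent is started as system.actorOf(Props[Main], "main"))
class Main extends Actor {
  val logging = context.actorOf(FromConfig.props(), name = "loggingRouter")
  val children = (1 to 3).map(i => context.actorOf(Child.props(logging), s"child$i"))
  def receive = { case msg => children.foreach(_ forward msg) }
}

class Child(logging: ActorRef) extends Actor {
  def receive = {
    case msg: String => logging ! LogProtocol(msg)
  }
}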

CQ 5.6 Reverse replication: Replication triggered, but no agent found or selected

I'm trying to write a custom file upload component for CQ 5.6, but I've run into a problem with reverse replication. The node is created on the publish instance but is not replicated to the author instance. After the replicator call, the following line appears in error.log:
com.day.cq.replication.impl.ReplicatorImpl Replication triggered, but no agent found or selected.
Replication agents are turned on. In other cases, user forms for example, replication works successfully, so I think the problem is somewhere in my code. Here is the code I use:
Node node = session.getNode(path);
ValueFactory valueFactory = session.getValueFactory();
Binary contentValue = valueFactory.createBinary(is);
Node parent = node.addNode(fileName, "nt:unstructured");
parent.setProperty(DELETED, false);
parent.setProperty(DESCRIPTION, description);
Node fileNode = parent.addNode(fileName, "nt:file");
fileNode.addMixin("mix:referenceable");
Node resNode = fileNode.addNode("jcr:content", "nt:resource");
resNode.setProperty(Property.JCR_DATA, contentValue);
Calendar lastModified = Calendar.getInstance();
lastModified.setTimeInMillis(lastModified.getTimeInMillis());
resNode.setProperty(Property.JCR_LAST_MODIFIED, lastModified);
parent.setProperty("cq:distribute", true);
parent.setProperty("cq:lastModified", Calendar.getInstance());
parent.setProperty("cq:lastModifiedBy", session.getUserID());
session.save();
replicator.replicate(session, ReplicationActionType.ACTIVATE, parent.getPath());
session.logout();
What should I do to make reverse replication work for the nodes I create in this servlet?
UPDATE:
According to Tomek Rękawek's answer I updated my code, but the problem is still not solved. Here is the new code:
ResourceResolver resourceResolver = resolverFactory.getAdministrativeResourceResolver(null);
Session session = resourceResolver.adaptTo(Session.class);
String path = (String) componentContext.getProperties().get(SAVEPATH);
Node node = session.getNode(path);
ValueFactory valueFactory = session.getValueFactory();
Binary contentValue = valueFactory.createBinary(is);
Node parent = node.addNode(fileName, "cq:Page");
Node jcrContent = parent.addNode("jcr:content", "cq:PageContent");
jcrContent.setProperty("cq:distribute", true);
jcrContent.setProperty("cq:lastModified", Calendar.getInstance());
jcrContent.setProperty("cq:lastModifiedBy", session.getUserID());
Node fileNode = jcrContent.addNode(fileName, "nt:file");
fileNode.addMixin("mix:referenceable");
Node resNode = fileNode.addNode("jcr:content", "nt:resource");
resNode.setProperty(Property.JCR_DATA, contentValue);
session.save();
session.logout();
Reverse replication is an action performed by the author instance, not the publish instance. The agent responsible for this is the Reverse Replication Agent on the author. It connects to the publish instance every 30 seconds and gathers page nodes with the cq:distribute property set.
In order to reverse replicate the image you need to:
Create a cq:Page node
Create a cq:PageContent node under it and name it jcr:content
Create your image node under the jcr:content node and save your session
Set the cq:distribute, cq:lastModified and cq:lastModifiedBy properties on the jcr:content node
Save the session
A sample method that creates a page wrapping an input stream and reverse-replicates it:
private void reverseReplicateBinary(Session session, String parentPath, String name, InputStream is)
        throws RepositoryException {
    ValueFactory valueFactory = session.getValueFactory();
    Node parent = session.getNode(parentPath);
    Node page = JcrUtils.getOrCreateUniqueByPath(parent, name, "cq:Page");
    Node jcrContent = page.addNode(Property.JCR_CONTENT, "cq:PageContent");
    Node file = jcrContent.addNode("file", "nt:file");
    Node resource = file.addNode(Property.JCR_CONTENT, "nt:resource");
    resource.setProperty(Property.JCR_DATA, valueFactory.createBinary(is));
    session.save();
    jcrContent.setProperty("cq:lastModified", Calendar.getInstance());
    jcrContent.setProperty("cq:lastModifiedBy", session.getUserID());
    jcrContent.setProperty("cq:distribute", true); // mark the page for pickup by the reverse replication agent
    session.save();
}
The complete example can be found in the gist.
That's all. You don't need to call the replicator manually; the author instance will gather the page automatically.
For your replicator object, make sure to set the agent ID to "outbox":
AgentIdFilter filter = new AgentIdFilter("outbox");
ReplicationOptions opts = new ReplicationOptions();
opts.setFilter(filter);
replicator.replicate(session, ReplicationActionType.ACTIVATE, parent.getPath(), opts);

How to create remote actors dynamically and control them using Akka

What I want to do is:
1) create a master actor on a server that can dynamically create 10 remote actors on 10 different machines
2) the master actor distributes the tasks to the 10 remote actors
3) when each remote actor finishes its work, it sends the result to the master actor
4) the master actor shuts down the whole system
My problems are:
1) I am not sure how to configure the master actor. Below is my server-side code:
class MasterApplication extends Bootable {
  val hostname = InetAddress.getLocalHost.getHostName
  val config = ConfigFactory.parseString(
    s"""
    akka {
      actor {
        provider = "akka.remote.RemoteActorRefProvider"
        deployment {
          /remotemaster {
            router = "round-robin"
            nr-of-instances = 10
            target {
              nodes = ["akka.tcp://remotesys@host1:2552", "akka.tcp://remotesys@host2:2552", ..., "akka.tcp://remotesys@host10:2552"]
            }
          }
        }
      }
      remote {
        enabled-transports = ["akka.remote.netty.tcp"]
        netty.tcp {
          hostname = "$hostname"
          port = 2552
        }
      }
    }""")
  val system = ActorSystem("master", ConfigFactory.load(config))
  val master = system.actorOf(Props(new Master), name = "master")

  def dosomething = master ! Begin()

  def startup() {}

  def shutdown() {
    system.shutdown()
  }
}

class Master extends Actor {
  val addresses = for (i <- 1 to 10)
    yield AddressFromURIString(s"akka.tcp://remotesys@host$i:2552")
  val routerRemote = context.actorOf(Props[RemoteMaster].withRouter(
    RemoteRouterConfig(RoundRobinRouter(12), addresses)))

  def receive = {
    case Begin() => {
      for (i <- 1 to 10) routerRemote ! Work(...)
    }
    case Result(root) => ...
  }
}

object project1 {
  def main(args: Array[String]) {
    new MasterApplication
  }
}
2) I do not know how to create a remote actor on a remote client. I read this tutorial. Do I need to write the client part similar to the server part, meaning I need to create an object that is responsible for creating a remote actor? But that would also mean that when I run the client part, the remote actor is already created! I am really confused.
3) I do not know how to shut down the whole system. In the above tutorial, I found a function named shutdown(), but I never see anyone call it.
This is my first time writing a distributed program in Scala and Akka, so I really need your help.
Thanks a lot.
Setting up the whole thing for the first time is a pain, but once you have done it you will have a good skeleton that you can reuse on a regular basis.
As I wrote in a comment below the question: use clustering, not remoting.
Here is how I do it:
I set up an sbt root project with three sub-projects:
common
frontend
backend
In common you put everything that is shared by both projects, e.g. the messages they exchange and the actor classes that are created in the frontend and deployed to the backend.
Put a reference.conf in the common project; here is mine:
akka {
  loglevel = INFO
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
    debug {
      lifecycle = on
    }
  }
  cluster {
    seed-nodes = [
      "akka.tcp://application@127.0.0.1:2558",
      "akka.tcp://application@127.0.0.1:2559"
    ]
  }
}
Now in the frontend:
akka {
  remote {
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2558
    }
  }
  cluster {
    auto-down = on
    roles = [frontend]
  }
}
and the backend
akka {
  remote {
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      port = 0
    }
  }
  cluster {
    auto-down = on
    roles = [backend]
  }
}
This will work like this: you start the frontend part first, which will control the cluster. Then you can start any number of backends, and they will join automatically (look at the port: it's 0, so it will be chosen randomly).
Now you need to add the whole logic to the frontend main. Create the actor system with the name application:
val system = ActorSystem("application")
Do the same in the backend main.
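For instance, a minimal sketch of the two entry points (assuming frontend.conf and backend.conf hold the per-role config above, with the common reference.conf on the classpath):
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

// both sides must use the same system name, "application",
// because it is part of the seed-node addresses
object FrontendMain extends App {
  val system = ActorSystem("application", ConfigFactory.load("frontend"))
}

object BackendMain extends App {
  val system = ActorSystem("application", ConfigFactory.load("backend"))
}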
Now write your code in the frontend so it creates your workers with a router; here's my example code:
context.actorOf(ServiceRuntimeActor.props(serviceName)
  .withRouter(
    ClusterRouterConfig(ConsistentHashingRouter(),
      ClusterRouterSettings(
        totalInstances = 10, maxInstancesPerNode = 3,
        allowLocalRoutees = false, useRole = Some("backend"))
    )
  ),
  name = shortServiceName)
Just change ServiceRuntimeActor to the name of your worker. The router will deploy workers to all backends that you've started, limiting them to at most 3 per node and 10 in total.
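For reference, a minimal skeleton (hypothetical) of such a worker, which lives in the common project so that both the frontend (which deploys it) and the backend (which runs it) have its class on the classpath:
import akka.actor.{Actor, Props}

object ServiceRuntimeActor {
  def props(serviceName: String): Props = Props(classOf[ServiceRuntimeActor], serviceName)
}

// deployed onto "backend" nodes by the cluster router shown above
class ServiceRuntimeActor(serviceName: String) extends Actor {
  def receive = {
    case work => sender() ! s"$serviceName handled: $work"
  }
}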
Hope this will help.