Scala: how to subscribe to an Akka leader-up event

I am using Akka with Play, and I want something to be done only once when a new leader is up. In other words, I am looking for something like this:
class LeaderUpHook {
  def onLeaderUp {
    log.log("a new leader is up")
  }
}
I searched the clustering docs, but still don't know how to do this.

You should be able to use the cluster events to figure this out. I'm basing my code example on the documentation, under the Subscribe to Cluster Events section of the docs here. In short, you create an actor that subscribes to the relevant cluster events in order to determine who the leader is and when that leader is up. That actor could look like this:
import akka.actor._
import akka.cluster._

class LeaderUpHandler extends Actor {
  import ClusterEvent._

  val cluster = Cluster(context.system)
  cluster.subscribe(self, classOf[MemberUp])
  cluster.subscribe(self, classOf[LeaderChanged])

  var leader: Option[Address] = None

  def receive = {
    case state: CurrentClusterState =>
      println(s"Got current state: $state")

    case MemberUp(member) =>
      println(s"member up: $member")
      leader.filter(_ == member.address) foreach { address =>
        println("leader is now up...")
      }

    case LeaderChanged(address) =>
      println(s"leader changed: $address")
      leader = address
  }
}
Then to test this code, you could do the following:
val cfg = """
  akka {
    actor {
      provider = "akka.cluster.ClusterActorRefProvider"
    }
    remote {
      netty.tcp {
        hostname = "127.0.0.1"
        port = 2552
      }
    }
    cluster {
      seed-nodes = [
        "akka.tcp://clustertest@127.0.0.1:2552"
      ]
      auto-down = on
    }
  }
"""
import com.typesafe.config.ConfigFactory

val config = ConfigFactory.parseString(cfg).withFallback(ConfigFactory.load)
val system = ActorSystem("clustertest", config)
system.actorOf(Props[LeaderUpHandler])
When you run the above code, you should see the leader detected as being up. This is an oversimplified example; I'm just trying to show that you can use the cluster events to find what you are looking for.
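If you want something closer to the onLeaderUp hook shape from the question, here is a minimal sketch building on the same cluster events; the callback parameter and the one-shot firing logic are my own additions, not part of the Akka API:

import akka.actor._
import akka.cluster._

// Sketch only: fires the supplied callback once, the first time the
// elected leader is also seen as Up.
class LeaderUpHook(onLeaderUp: Address => Unit) extends Actor {
  import ClusterEvent._

  val cluster = Cluster(context.system)
  cluster.subscribe(self, classOf[MemberUp])
  cluster.subscribe(self, classOf[LeaderChanged])

  var leader: Option[Address] = None
  var upMembers = Set.empty[Address]
  var fired = false

  def receive = {
    case state: CurrentClusterState =>
      upMembers ++= state.members.iterator.filter(_.status == MemberStatus.Up).map(_.address)
      leader = state.leader
      maybeFire()

    case MemberUp(member) =>
      upMembers += member.address
      maybeFire()

    case LeaderChanged(address) =>
      leader = address
      maybeFire()
  }

  def maybeFire(): Unit =
    for (address <- leader if !fired && upMembers(address)) {
      fired = true
      onLeaderUp(address)
    }
}

You would start it alongside the handler above, e.g. system.actorOf(Props(new LeaderUpHook(addr => println(s"leader $addr is up")))).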

Related

Kafka Connect using REST API with Strimzi with kind: KafkaConnector

I'm trying to use the Kafka Connect REST API for managing connectors. For simplicity, consider the following pause implementation:
def pause(): Unit = {
  logger.info(s"pause() Triggered")
  val response = HttpClient.newHttpClient.send({
    HttpRequest
      .newBuilder(URI.create(config.connectUrl + s"/connectors/${config.connectorName}/pause"))
      .PUT(BodyPublishers.noBody)
      .timeout(Duration.ofMillis(config.timeout.toMillis.toInt))
      .build()
  }, BodyHandlers.ofString)
  if (response.statusCode() != HTTPStatus.Accepted) {
    throw new Exception(s"Could not pause connector: ${response.body}")
  }
}
Since I'm using KafkaConnector as a resource, I cannot use the Kafka Connect REST API, because the connector operator has the KafkaConnector resources as its single source of truth; manual changes such as pause made directly via the Kafka Connect REST API are reverted by the Cluster Operator.
So to pause the connector I need to edit the resource in some way.
I'm struggling to change the logic of the current function. It would be great to have some practical examples of how to handle KafkaConnector resources.
I checked the Using Strimzi docs but couldn't find any practical example.
Thanks!
After help from @Jakub, I managed to create my new client:
// Imports reconstructed from context; the scalalogging and Strimzi artifacts are assumptions
import java.util.concurrent.TimeUnit

import scala.util.{Failure, Success, Try}

import com.typesafe.scalalogging.StrictLogging
import io.fabric8.kubernetes.client.{Config, DefaultKubernetesClient}
import io.strimzi.api.kafka.Crds
import io.strimzi.api.kafka.model.KafkaConnector

class KubernetesService(config: Configuration) extends StrictLogging {

  private[this] val client = new DefaultKubernetesClient(Config.autoConfigure(config.connectorContext))

  def setPause(pause: Boolean): Unit = {
    logger.info(s"[KubernetesService] - setPause($pause) Triggered")
    val connector = getConnector()
    connector.getSpec.setPause(pause)
    Crds.kafkaConnectorOperation(client).inNamespace(config.connectorNamespace).withName(config.connectorName).replace(connector)
    Crds.kafkaConnectorOperation(client)
      .inNamespace(config.connectorNamespace)
      .withName(config.connectorName)
      .waitUntilCondition(connector => {
        connector != null &&
        connector.getSpec.getPause == pause && {
          val desiredState = if (pause) "Paused" else "Running"
          connector.getStatus.getConditions.stream().anyMatch(_.getType.equalsIgnoreCase(desiredState))
        }
      }, config.timeout.toMillis, TimeUnit.MILLISECONDS)
  }

  def delete(): Unit = {
    logger.info(s"[KubernetesService] - delete() Triggered")
    Crds.kafkaConnectorOperation(client).inNamespace(config.connectorNamespace).withName(config.connectorName).delete
    Crds.kafkaConnectorOperation(client)
      .inNamespace(config.connectorNamespace)
      .withName(config.connectorName)
      .waitUntilCondition(_ == null, config.timeout.toMillis, TimeUnit.MILLISECONDS)
  }

  def create(oldKafkaConnect: KafkaConnector): Unit = {
    logger.info(s"[KubernetesService] - create(${oldKafkaConnect.getMetadata}) Triggered")
    Crds.kafkaConnectorOperation(client).inNamespace(config.connectorNamespace).withName(config.connectorName).create(oldKafkaConnect)
    Crds.kafkaConnectorOperation(client)
      .inNamespace(config.connectorNamespace)
      .withName(config.connectorName)
      .waitUntilCondition(connector => {
        connector != null &&
        connector.getStatus.getConditions.stream().anyMatch(_.getType.equalsIgnoreCase("Running"))
      }, config.timeout.toMillis, TimeUnit.MILLISECONDS)
  }

  def getConnector(): KafkaConnector = {
    logger.info(s"[KubernetesService] - getConnector() Triggered")
    Try {
      Crds.kafkaConnectorOperation(client).inNamespace(config.connectorNamespace).withName(config.connectorName).get
    } match {
      case Success(connector) => connector
      case Failure(_: NullPointerException) => throw new NullPointerException(s"Failure on getConnector(${config.connectorName}) on ns: ${config.connectorNamespace}, context: ${config.connectorContext}")
      case Failure(exception) => throw exception
    }
  }
}
To pause the connector, you can edit the KafkaConnector resource and set the pause field in .spec to true (see the docs). There are several ways to do it: you can use kubectl and either apply the new YAML from a file (kubectl apply) or edit the resource interactively using kubectl edit.
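For illustration, the edited resource could look roughly like this (all names and the connector config are placeholders; spec.pause is the relevant field):

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-connector                # placeholder name
  labels:
    strimzi.io/cluster: my-connect  # placeholder Connect cluster name
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector  # placeholder
  tasksMax: 1
  pause: true                       # the field that pauses the connector
  config:                           # placeholder connector config
    file: /tmp/test.txt
    topic: my-topic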
If you want to do it programmatically, you will need to use a Kubernetes client to edit the resource. In Java, you can also use the api module of Strimzi, which has all the structures for editing the resources. I put together a simple example of pausing the Kafka connector in Java, using the Fabric8 Kubernetes client and the api module:
package cz.scholz.strimzi.api.examples;

import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.dsl.MixedOperation;
import io.fabric8.kubernetes.client.dsl.Resource;
import io.strimzi.api.kafka.Crds;
import io.strimzi.api.kafka.KafkaConnectorList;
import io.strimzi.api.kafka.model.KafkaConnector;

public class PauseConnector {
    public static void main(String[] args) {
        String namespace = "myproject";
        String crName = "my-connector";

        KubernetesClient client = new DefaultKubernetesClient();
        MixedOperation<KafkaConnector, KafkaConnectorList, Resource<KafkaConnector>> op = Crds.kafkaConnectorOperation(client);

        KafkaConnector connector = op.inNamespace(namespace).withName(crName).get();
        connector.getSpec().setPause(true);
        op.inNamespace(namespace).withName(crName).replace(connector);

        client.close();
    }
}
(See https://github.com/scholzj/strimzi-api-examples for the full project)
I'm not a Scala user, but I assume it should be usable from Scala as well; I leave rewriting it from Java to Scala to you.
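For reference, a rough Scala equivalent of the Java example above might look like this (same Fabric8 client and Strimzi api module; untested, so treat it as a sketch):

import io.fabric8.kubernetes.client.DefaultKubernetesClient
import io.strimzi.api.kafka.Crds

object PauseConnector extends App {
  val namespace = "myproject"
  val crName = "my-connector"

  val client = new DefaultKubernetesClient()
  val op = Crds.kafkaConnectorOperation(client)

  // Fetch the current resource, flip spec.pause, and replace it
  val connector = op.inNamespace(namespace).withName(crName).get()
  connector.getSpec.setPause(true)
  op.inNamespace(namespace).withName(crName).replace(connector)

  client.close()
}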

Colossus Background Task

I am building an application using Tumblr's new Colossus framework (http://tumblr.github.io/colossus/). There is still limited documentation on it (and the fact that I'm still very new to Akka doesn't help), so I was wondering if someone could chime in on whether my approach is correct.
The application is simple and consists of two key components:
A thin web service layer that will queue tasks into Redis
A background worker which will poll the same Redis instance for available tasks and process them as they become available
I made a simple example to demonstrate that my concurrency model will work (and it does), which I posted below. However, I would like to make sure that there is not a more idiomatic way to do this.
import colossus.IOSystem
import colossus.protocols.http.Http
import colossus.protocols.http.HttpMethod.Get
import colossus.protocols.http.UrlParsing._
import colossus.service.{Callback, Service}
import colossus.task.Task

object QueueProcessor {
  implicit val io = IOSystem() // Create separate IOSystem for worker

  Task { ctx =>
    while (true) {
      // Below code is for testing purposes only. This is where the Redis loop will live,
      // and will use a blocking call to get the next available task
      Thread.sleep(5000)
      println("task iteration")
    }
  }

  def ping = println("starting") // Method to launch this processor
}

object Main extends App {
  implicit val io = IOSystem() // Primary IOSystem for the web service

  QueueProcessor.ping // Launch worker

  Service.serve[Http]("app", 8080) { ctx =>
    ctx.handle { conn =>
      conn.become {
        case req @ Get on Root => Callback.successful(req.ok("Here"))
        // The methods to add tasks to the queue will live here
      }
    }
  }
}
I tested the above model and it works: the background loop continues running while the service happily accepts requests. But I think there might be a better way to do this with workers (I found nothing in the documentation), or perhaps Akka Streams?
I got it working with something that seems semi-idiomatic to me. However, new answers and feedback are still welcome!
import scala.concurrent.{Future, blocking}

import akka.actor.{Actor, Props}
import colossus.IOSystem
import colossus.protocols.http.Http
import colossus.protocols.http.HttpMethod.Get
import colossus.protocols.http.UrlParsing._
import colossus.service.{Callback, Service}

class Processor extends Actor {
  import scala.concurrent.ExecutionContext.Implicits.global

  override def receive = {
    case "start" => self ! "next"
    case "next" =>
      Future {
        blocking {
          // Blocking call here to wait on Redis (BRPOP/BLPOP)
          self ! "next"
        }
      }
  }
}

object Main extends App {
  implicit val io = IOSystem()

  val processor = io.actorSystem.actorOf(Props[Processor])
  processor ! "start"

  Service.serve[Http]("app", 8080) { ctx =>
    ctx.handle { conn =>
      conn.become {
        // Queue here
        case req @ Get on Root => Callback.successful(req.ok("Here\n"))
      }
    }
  }
}

How to create remote actors dynamically and control them using Akka

What I want to do is:
1) create a master actor on a server which can dynamically create 10 remote actors on 10 different machines
2) the master actor distributes the tasks to the 10 remote actors
3) when each remote actor finishes its work, it sends the result to the master actor
4) the master actor shuts down the whole system
My problems are:
1) I am not sure how to configure the master actor; below is my server-side code:
class MasterApplication extends Bootable {
  val hostname = InetAddress.getLocalHost.getHostName

  val config = ConfigFactory.parseString(
    s"""
    akka {
      actor {
        provider = "akka.remote.RemoteActorRefProvider"
        deployment {
          /remotemaster {
            router = "round-robin"
            nr-of-instances = 10
            target {
              nodes = ["akka.tcp://remotesys@host1:2552", "akka.tcp://remotesys@host2:2552", ..., "akka.tcp://remotesys@host10:2552"]
            }
          }
        }
      }
      remote {
        enabled-transports = ["akka.remote.netty.tcp"]
        netty.tcp {
          hostname = "$hostname"
          port = 2552
        }
      }
    }""")

  val system = ActorSystem("master", ConfigFactory.load(config))
  val master = system.actorOf(Props(new Master), name = "master")

  def dosomething = master ! Begin()

  def startup() {}

  def shutdown() {
    system.shutdown()
  }
}
class Master extends Actor {
  val addresses = for (i <- 1 to 10)
    yield AddressFromURIString(s"akka.tcp://remotesys@host$i:2552")

  val routerRemote = context.actorOf(Props[RemoteMaster].withRouter(
    RemoteRouterConfig(RoundRobinRouter(12), addresses)))

  def receive = {
    case Begin() => {
      for (i <- 1 to 10) routerRemote ! Work(.....)
    }
    case Result(root) => ...
  }
}
object project1 {
  def main(args: Array[String]) {
    new MasterApplication
  }
}
2) I do not know how to create a remote actor on the remote client. I read this tutorial. Do I need to write the client part similar to the server part, meaning I need to create an object which is responsible for creating a remote actor? But that would also mean that when I run the client part, the remote actor is already created! I am really confused.
3) I do not know how to shut down the whole system. In the above tutorial, I found a function named shutdown(), but I never see anyone call it.
This is my first time writing a distributed program in Scala and Akka, so I really need your help.
Thanks a lot.
Setting up the whole thing for the first time is a pain, but once you do it you will have a good skeleton that you will use on a regular basis.
As I wrote in a comment below the question: use clustering, not remoting.
Here is how I do it:
I set up an sbt root project with three sub-projects.
common
frontend
backend
In common you put everything that is shared by both projects, e.g. the messages they exchange and the actor classes that are created in the frontend and deployed to the backend.
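A minimal build.sbt sketch of that layout (module names from above; the wiring is my assumption):

lazy val common = project.in(file("common"))

// frontend and backend both see the shared messages and actor classes via common
lazy val frontend = project.in(file("frontend")).dependsOn(common)
lazy val backend  = project.in(file("backend")).dependsOn(common)

lazy val root = project.in(file("."))
  .aggregate(common, frontend, backend)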
Add a reference.conf to the common project; here is mine:
akka {
  loglevel = INFO
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
    debug {
      lifecycle = on
    }
  }
  cluster {
    seed-nodes = [
      "akka.tcp://application@127.0.0.1:2558",
      "akka.tcp://application@127.0.0.1:2559"
    ]
  }
}
Now in the frontend:
akka {
  remote {
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2558
    }
  }
  cluster {
    auto-down = on
    roles = [frontend]
  }
}
and in the backend:
akka {
  remote {
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      port = 0
    }
  }
  cluster {
    auto-down = on
    roles = [backend]
  }
}
This will work like this:
You start the frontend part first, which will control the cluster.
Then you can start any number of backends you want; they will join automatically (look at the port: it's 0, so it will be chosen randomly).
Now you need to add the whole logic to the frontend main:
Create the actor system with the name application:
val system = ActorSystem("application")
Do the same in the backend main.
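The two mains can stay trivial; a sketch (the object names are my own):

import akka.actor.ActorSystem

// Frontend main: picks up the frontend application.conf (port 2558, role "frontend")
object FrontendApp extends App {
  val system = ActorSystem("application")
  // ... create the workers' router and the rest of the frontend logic here
}

// Backend main: same system name, so the node joins the same cluster;
// port 0 means a random port, so you can start several backends
object BackendApp extends App {
  val system = ActorSystem("application")
}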
Now write your code in the frontend so it creates your workers with a router; here's my example code:
context.actorOf(
  ServiceRuntimeActor.props(serviceName)
    .withRouter(
      ClusterRouterConfig(ConsistentHashingRouter(),
        ClusterRouterSettings(
          totalInstances = 10, maxInstancesPerNode = 3,
          allowLocalRoutees = false, useRole = Some("backend"))
      )
    ),
  name = shortServiceName)
Just change ServiceRuntimeActor to the name of your worker. This will deploy workers to all the backends that you've started, limited to at most 3 per node and 10 in total.
Hope this helps.

Is there a way to obtain the ip address of a remote actor?

I was wondering if there was a way to obtain the IP address of a remote Akka actor, because currently I am using a broadcast IP address to set up the actor system (this is the only one that seems to work). I am currently able to see the actor path, but would like to know exactly what machine it resides on, since the broadcast IP is for the whole network.
Here is some of my code for the master actor:
# remote actor config
remotecreation { # user-defined name for the configuration
  include "common"
  akka {
    actor {
      deployment {
        /remoteActor { # Specifically has to be the name of the remote actor
          router = "round-robin"
          nr-of-instances = 10
          target {
            nodes = ["akka://RemoteCreation@172.17.100.232:2554", "akka://RemoteCreation@172.17.100.224:2554"]
          }
        }
      }
    }
    remote.netty.port = 2554
  }
  remote {
    log-received-messages = on
    log-sent-messages = on
  }
}
Here is the Master definition:
class Master(goodies: AuthNetActorObject) extends Actor {
  var start: Long = _ // helps measure the calculation time

  def receive = {
    case "start" => {
      System.out.println("Master Started!")
      System.out.println("The master is at: " + self.path.toString())
      ...
    }
  }
}
Here is where I initialize and bang the actor:
object Payment extends Controller {
  var goodies: AuthNetActorObject = null
  val system = ActorSystem("RemoteCreation", ConfigFactory.load.getConfig("remotecreation"))
  ...
  val master = system.actorOf(Props(new Master(actorObject)))
  master ! "start"
  ...
}
Use actorRef.path.address.host
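For example (a small sketch; note that for a purely local ActorRef the host and port are None, since they are only filled in for remote addresses):

// Given some ActorRef pointing at (or received from) a remote node:
val address = actorRef.path.address
val host: Option[String] = address.host
val port: Option[Int] = address.port

// Inside an actor you can apply the same to the sender of the current message:
// val remoteHost = sender.path.address.host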

Possible to make a producer/consumer obj in Scala using actors 'threadless' (no receive...)?

So I want to write some network code that appears to be blocking without actually blocking a thread. I'm going to send some data out on the wire and have a 'queue' of responses that will come back over the network. I wrote up a very simple proof of concept, inspired by the producer/consumer example in the actor tutorial found here: http://www.scala-lang.org/node/242
The thing is, using receive appears to take up a thread, so I'm wondering if there's any way to not take up a thread and still get the 'blocking feel'. Here's my code sample:
import scala.actors.Actor._;
import scala.actors.Actor;

case class Request(val s: String);
case class Message(val s: String);

class Connection {
  private val act: Actor = actor {
    loop {
      react {
        case m: Message => receive { case r: Request => reply { m } }
      }
    }
  }

  def getNextResponse(): Message = {
    return (act !? new Request("get")).asInstanceOf[Message];
  }

  // this would call the network layer and send something over the wire
  def doSomething() {
    generateResponse();
  }

  // this is simulating the network layer getting some data back
  // and sending it to the appropriate Connection object
  private def generateResponse() {
    act ! new Message("someData");
    act ! new Message("moreData");
    act ! new Message("even more data");
  }
}

object runner extends Application {
  val conn = new Connection();
  conn.doSomething();
  println(conn.getNextResponse());
  println(conn.getNextResponse());
  println(conn.getNextResponse());
}
Is there a way to do this without using receive, thereby making it threadless?
You could rely directly on react, which blocks logically while releasing the underlying thread:
class Connection {
  private val act: Actor = actor {
    loop {
      react {
        case r: Request => reply { r }
      }
    }
  }
  [...]
I expect that you can use react rather than receive and not have actors take up threads the way receive does. There is a thread on this issue at receive vs react.