Configure nested routers in Akka - Scala

I have nested routers that should be created via FromConfig(). What I want is something like this:
test {
  akka.actor.deployment {
    /worker {
      router = round-robin
      nr-of-instances = 5
      /slave {
        router = broadcast
        nr-of-instances = 4
      }
    }
  }
}
But if I run this, I get an exception telling me that [akka://test/user/worker/slave] needs external configuration, and it suggests application.conf.
The names are correct, and without the nested routing it worked. What am I missing?
Edit
I tried another way to configure it:
test {
  akka.actor.deployment {
    /worker {
      router = round-robin
      nr-of-instances = 5
    }
    /worker/slave {
      router = broadcast
      nr-of-instances = 4
    }
  }
}
This isn't working either. I also noticed that the actual position of the error is not [akka://test/user/worker/slave] but [akka://test/user/worker/$a/slave]. That makes it even weirder for me. I understand that $a is a routee, but how can I configure it?
Edit 2
Thank you for your quick replies. This does not work for me at all because I am using Scala 2.9.2 and Akka 2.0. Is there a way to achieve something similar in Akka 2.0?

You can use a wildcard in the deployment path name: /worker/*/slave
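For example, combined with the flat layout from the second attempt above, the deployment section could look like this (a sketch; note that path elements containing the wildcard have to be quoted in HOCON):
test {
  akka.actor.deployment {
    /worker {
      router = round-robin
      nr-of-instances = 5
    }
    # matches the slave router under every routee ($a, $b, ...) of /worker
    "/worker/*/slave" {
      router = broadcast
      nr-of-instances = 4
    }
  }
}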

Related

openSIPS: set up an onreply_route if the call is picked up

I'm wondering whether it is possible to set a condition for the call being answered/picked up in an onreply_route, something like this:
onreply_route {
    if (call picked up) {
        xlog("ON AIR");
    }
}
There are quite a few ways in which you can achieve this. For your case, I would use the tm module's t_check_status() function:
onreply_route {
    if (t_check_status("2[0-9][0-9]")) {
        xlog("ON AIR");
    }
}
However, note that this will not work if your SIP proxy is stateless (i.e. if you don't use tm at all)! In this case, we would need to access the information in a more low-level way, by reading it straight off the received message using the $rs variable (SIP reply status):
onreply_route {
    if ($rs == 200) { # or ($rs =~ "2[0-9][0-9]")
        xlog("ON AIR");
    }
}

Routing configuration for an Akka cluster with remote nodes

I have several remote nodes that run on different machines and are connected in a cluster.
There is a logging system on one of the nodes (the one with the role 'logging') that writes logs to a database.
I chose to use routing to deliver messages to the logger from the other nodes.
I have one node with a main actor and three child actors. Each of them must send logs to the logger node.
My router configuration:
akka.actor.deployment {
  /main/loggingRouter = {
    router = adaptive-group
    nr-of-instances = 100
    cluster {
      enabled = on
      routees-path = "/user/loggingEvent"
      use-role = logging
      allow-local-routees = on
    }
  }
  "/main/*/loggingRouter" = {
    router = adaptive-group
    nr-of-instances = 100
    cluster {
      enabled = on
      routees-path = "/user/loggingEvent"
      use-role = logging
      allow-local-routees = on
    }
  }
}
And I create the router in each actor with this code:
val logging = context.actorOf(FromConfig.props(), name = "loggingRouter")
And send:
logging ! LogProtocol("msg")
After that, the logger receives messages from only one child actor. I don't know how to debug this, but my guess is that I'm applying the wrong pattern here.
What is the best practice for this task? Thanks.
The actor on the logger node:
system.actorOf(Logging.props(), name = "loggingEvent")
The problem is the routers having the same names. As I see it, a good pattern is to create one router in the main actor and pass it to the children.
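A minimal sketch of that pattern (Worker, Main and the String job messages are illustrative names; LogProtocol and the /main/loggingRouter deployment come from the question):
import akka.actor.{Actor, ActorRef, Props}
import akka.routing.FromConfig

case class LogProtocol(msg: String)

// child actors use the router they are given instead of creating their own
class Worker(logging: ActorRef) extends Actor {
  def receive = {
    case job: String =>
      logging ! LogProtocol(s"done: $job")
  }
}

class Main extends Actor {
  // one router, created once from the /main/loggingRouter deployment section
  val logging: ActorRef = context.actorOf(FromConfig.props(), name = "loggingRouter")
  // hand the same ActorRef to every child
  val workers = (1 to 3).map { i =>
    context.actorOf(Props(classOf[Worker], logging), name = s"worker-$i")
  }
  def receive = {
    case job: String => workers.foreach(_ ! job)
  }
}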

Akka Remoting Failures with Amazon EC2

I'm building a library with Akka actors in Scala to do some large-scale data crunching.
I'm running my code on Amazon EC2 spot instances using StarCluster. The program is unstable because the actor remoting sometimes drops:
While the code is running, nodes disconnect one by one within a few minutes. The nodes say something like:
[ERROR] [07/16/2014 17:40:06.837] [slave-akka.actor.default-dispatcher-4] [akka://slave/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fslave%40master%3A2552-0/endpointWriter] AssociationError [akka.tcp://slave@node005:2552] -> [akka.tcp://slave@master:2552]: Error [Association failed with [akka.tcp://slave@master:2552]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://slave@master:2552]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: master
and
[WARN] [07/16/2014 17:30:05.548] [slave-akka.actor.default-dispatcher-12] [Remoting] Tried to associate with unreachable remote address [akka.tcp://slave@master:2552]. Address is now quarantined, all messages to this address will be delivered to dead letters.
Even though I can ping between the nodes just fine.
I've been trying to fix this; I've figured it's some configuration setting. The Akka remoting documentation even says,
However in cloud environments, such as Amazon EC2, the value could be
increased to 12 in order to account for network issues that sometimes
occur on such platforms.
However, I've set that and beyond, and still no luck fixing the issue. Here is my current remoting configuration:
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      port = 2552
      # for modelling
      #send-buffer-size = 50000000b
      #receive-buffer-size = 50000000b
      #maximum-frame-size = 25000000b
      send-buffer-size = 5000000b
      receive-buffer-size = 5000000b
      maximum-frame-size = 2500000b
    }
    watch-failure-detector.threshold = 100
    acceptable-heartbeat-pause = 20s
    transport-failure-detector {
      heartbeat-interval = 4 s
      acceptable-heartbeat-pause = 20 s
    }
  }
  log-dead-letters = off
}
and I deploy my actors like this, all from the master node:
val o2m = system.actorOf(Props(classOf[IntOneToMany], p), name = "o2m")
val remote = Deploy(scope = RemoteScope(Address("akka.tcp", "slave", args(i), 2552)))
val b = system.actorOf(Props(classOf[IntBoss], o2m).withDeploy(remote), name = "boss_" + i)
etc.
Can anyone point me to a mistake I'm making, or how I can fix this problem and stop nodes from disconnecting? Alternatively, a solution that just re-launches the actors when they are disconnected also works; I don't care much about dropped messages. In fact, I thought this was supposed to be easily configurable behavior, but I'm finding it difficult to find the right place to look for it.
Thank you
At the least, the properties syntax was wrong: acceptable-heartbeat-pause should be under watch-failure-detector (yours are at the same level). They should look like this:
watch-failure-detector {
  threshold = 100
  acceptable-heartbeat-pause = 20 s
}
transport-failure-detector {
  heartbeat-interval = 4 s
  acceptable-heartbeat-pause = 20 s
}

Jetty works for HTTP but not HTTPS

I am trying to create a Jetty consumer. I am able to get it running successfully using the endpoint URI:
jetty:http://0.0.0.0:8080
However, when I modify the endpoint URI for HTTPS:
jetty:https://0.0.0.0:8443
the page times out trying to load. This seems odd, because the Camel documentation states it should work right out of the box.
I have since loaded a signed SSL certificate into Java's default keystore; my attempted implementation to load it is below (following http://camel.apache.org/jetty.html).
I have a basic Jetty instance using the akka-camel library with Akka and Scala, e.g.:
class RestActor extends Actor with Consumer {
  val ksp: KeyStoreParameters = new KeyStoreParameters();
  ksp.setPassword("...");
  val kmp: KeyManagersParameters = new KeyManagersParameters();
  kmp.setKeyStore(ksp);
  val scp: SSLContextParameters = new SSLContextParameters();
  scp.setKeyManagers(kmp);
  val jettyComponent: JettyHttpComponent = CamelExtension(context.system).context.getComponent("jetty", classOf[JettyHttpComponent])
  jettyComponent.setSslContextParameters(scp);
  def endpointUri = "jetty:https://0.0.0.0:8443/"
  def receive = {
    case msg: CamelMessage => {
      ...
    }
    ...
  }
  ...
}
This resulted in some progress: the page no longer times out, but instead gives a "The connection was interrupted" error. I am not sure where to go from here, because Camel is not throwing any exceptions but rather (apparently) failing silently somewhere.
Does anybody know what would cause this behavior?
When using Java's keytool I did not specify an output file. It didn't throw an error, so the certificate probably went somewhere. I created a new keystore and explicitly imported my .crt into it. I then explicitly set the file path of that keystore in the code, and everything works now!
If I had to speculate, it is possible things failed silently because I was adding the certs to Jetty's general bank of certs to use if eligible, instead of explicitly binding one as the SSL certificate for the endpoint.
class RestActor extends Actor with Consumer {
  val ksp: KeyStoreParameters = new KeyStoreParameters();
  ksp.setResource("/path/to/keystore");
  ksp.setPassword("...");
  val kmp: KeyManagersParameters = new KeyManagersParameters();
  kmp.setKeyStore(ksp);
  val scp: SSLContextParameters = new SSLContextParameters();
  scp.setKeyManagers(kmp);
  val jettyComponent: JettyHttpComponent = CamelExtension(context.system).context.getComponent("jetty", classOf[JettyHttpComponent])
  jettyComponent.setSslContextParameters(scp);
  def endpointUri = "jetty:https://0.0.0.0:8443/"
  def receive = {
    case msg: CamelMessage => {
      ...
    }
    ...
  }
  ...
}
Hopefully somebody in the future can find this code useful as a template for implementing Jetty over SSL with akka-camel (surprisingly, no examples seem to exist).

How to create remote actors dynamically and control them using Akka

What I want to do is:
1) create a master actor on a server that can dynamically create 10 remote actors on 10 different machines
2) the master actor distributes the task to the 10 remote actors
3) when each remote actor finishes its work, it sends the results to the master actor
4) the master actor shuts down the whole system
My problems are:
1) I am not sure how to configure the master actor; below is my server-side code:
class MasterApplication extends Bootable {
  val hostname = InetAddress.getLocalHost.getHostName
  val config = ConfigFactory.parseString(
    s"""
    akka {
      actor {
        provider = "akka.remote.RemoteActorRefProvider"
        deployment {
          /remotemaster {
            router = "round-robin"
            nr-of-instances = 10
            target {
              nodes = ["akka.tcp://remotesys@host1:2552", "akka.tcp://remotesys@host2:2552", ..., "akka.tcp://remotesys@host10:2552"]
            }
          }
        }
      }
      remote {
        enabled-transports = ["akka.remote.netty.tcp"]
        netty.tcp {
          hostname = "$hostname"
          port = 2552
        }
      }
    }""")
  val system = ActorSystem("master", ConfigFactory.load(config))
  val master = system.actorOf(Props(new Master), name = "master")
  def dosomething = master ! Begin()
  def startup() {}
  def shutdown() {
    system.shutdown()
  }
}
class Master extends Actor {
  val addresses = for (i <- 1 to 10)
    yield AddressFromURIString(s"akka.tcp://remotesys@host$i:2552")
  val routerRemote = context.actorOf(Props[RemoteMaster].withRouter(
    RemoteRouterConfig(RoundRobinRouter(12), addresses)))
  def receive = {
    case Begin => {
      for (i <- 1 to 10) routerRemote ! Work(.....)
    }
    case Result(root) => ........
  }
}
object project1 {
  def main(args: Array[String]) {
    new MasterApplication
  }
}
2) I do not know how to create a remote actor on the remote client. I read this tutorial. Do I need to write the client part similar to the server part, which would mean creating an object that is responsible for creating a remote actor? But that would also mean that when I run the client part, the remote actor is already created! I am really confused.
3) I do not know how to shut down the whole system. In the above tutorial I found a function named shutdown(), but I never see anyone call it.
This is my first time writing a distributed program in Scala and Akka, so I really need your help.
Thanks a lot.
Setting up the whole thing for the first time is a pain, but once you have done it you will have a good skeleton that you will use on a regular basis.
As I wrote in a comment below the question: use clustering, not remoting.
Here is how I do it:
I set up an sbt root project with three sub-projects (see the build.sbt sketch after this list):
common
frontend
backend
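A build.sbt for such a layout might look roughly like this (a sketch; the project names and the aggregation are assumptions, not from the answer):
// three sub-projects; frontend and backend both see the shared code in common
lazy val common = project.in(file("common"))

lazy val frontend = project.in(file("frontend")).dependsOn(common)

lazy val backend = project.in(file("backend")).dependsOn(common)

lazy val root = project.in(file("."))
  .aggregate(common, frontend, backend)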
In common you put everything that is shared by both projects, e.g. the messages they exchange and the actor classes that are created in the frontend and deployed to the backend.
Put a reference.conf in the common project; here is mine:
akka {
  loglevel = INFO
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
    debug {
      lifecycle = on
    }
  }
  cluster {
    seed-nodes = [
      "akka.tcp://application@127.0.0.1:2558",
      "akka.tcp://application@127.0.0.1:2559"
    ]
  }
}
Now in the frontend:
akka {
  remote {
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2558
    }
  }
  cluster {
    auto-down = on
    roles = [frontend]
  }
}
and the backend:
akka {
  remote {
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      port = 0
    }
  }
  cluster {
    auto-down = on
    roles = [backend]
  }
}
This works like this:
You start the frontend part first; it controls the cluster.
Then you can start any number of backends, and they will join automatically (look at the port: it's 0, so it will be chosen randomly).
Now you need to add the whole logic to the frontend main:
Create the actor system with the name "application":
val system = ActorSystem("application")
Do the same in the backend main.
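A minimal pair of mains might look like the following (a sketch; it assumes each sub-project's application.conf layers the frontend or backend settings above on top of the common reference.conf):
import akka.actor.ActorSystem

object FrontendApp extends App {
  // the name must match the "application" part of the seed-node addresses
  val system = ActorSystem("application")
  // ... create the router and the rest of the frontend logic here
}

object BackendApp extends App {
  val system = ActorSystem("application")
  // nothing else needed here: workers are remote-deployed by the frontend's router
}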
Now write your code in the frontend so it creates your workers with a router; here's my example code:
context.actorOf(ServiceRuntimeActor.props(serviceName)
  .withRouter(
    ClusterRouterConfig(ConsistentHashingRouter(),
      ClusterRouterSettings(
        totalInstances = 10, maxInstancesPerNode = 3,
        allowLocalRoutees = false, useRole = Some("backend"))
    )
  ),
  name = shortServiceName)
Just change ServiceRuntimeActor to the name of your worker. It will deploy workers to all the backends you've started, limited to at most 3 per node and 10 in total.
Hope this will help.