I would like to configure Akka to use remote actors with a Redis durable mailbox, as shown below.
common.conf file:
akka {
  actor {
    mailbox {
      redis {
        hostname = "127.0.0.1"
        port = 6379
      }
    }
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    netty {
      hostname = "127.0.0.1"
    }
  }
}
and my application.conf file:
calculatorActor { include "common" }
remotecreation {
  include "common"
  akka {
    actor {
      deployment {
        /advancedCalculator {
          router = "round-robin"
          nr-of-instances = 200
          target {
            nodes = ["akka://CalculatorApplication@127.0.0.1:2552"]
          }
        }
      }
    }
    remote.netty.port = 2554
  }
}
This configuration is derived from akka-sample-remote. When I run the application, I never see any connections made to Redis (the durable mailbox!). The Redis log only contains:
0 clients connected (0 slaves)
You must specify a dispatcher with the correct mailbox type.
From the docs:
my-dispatcher {
  mailbox-type = akka.actor.mailbox.RedisBasedMailboxType
}
and then create your actor with this dispatcher:
val myActor = system.actorOf(Props[MyActor].withDispatcher("my-dispatcher"), name = "myactor")
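Applied to the setup in the question, a minimal sketch could look like the following. This assumes the akka-durable-mailboxes module is on the classpath, that the my-dispatcher block above sits next to the akka block in the loaded config, and AdvancedCalculatorActor is a placeholder for your own actor class:
// Hypothetical sketch for Akka 2.x with the durable mailbox module on the classpath.
// "AdvancedCalculatorActor" is a placeholder name; substitute your actual actor class.
import akka.actor.{ActorSystem, Props}
import com.typesafe.config.ConfigFactory

val system = ActorSystem("CalculatorApplication",
  ConfigFactory.load.getConfig("remotecreation"))

// Route the actor's messages through the Redis-backed dispatcher defined above,
// so its mailbox is actually persisted in Redis.
val calculator = system.actorOf(
  Props[AdvancedCalculatorActor].withDispatcher("my-dispatcher"),
  name = "advancedCalculator")
Only then should you see client connections appear in the Redis log.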
I have only just started working with Akka, and I deployed a remote Akka actor with the following configuration:
akka {
  log-dead-letters = on
  jvm-exit-on-fatal-error = on
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }
  remote {
    log-remote-lifecycle-events = on
    netty.tcp {
      hostname = 172.24.0.162
      port = 9998
    }
  }
}
At the moment, this is the only way I can call the actor remotely:
path= s"akka. tcp://cliActorSystem #172.24.0.162:9998/user/"
actorSystem.actorSelection(path)
But I can't expose this server directly now; I have to use nginx to map it to a domain name and a new port.
akka.conf:
akka {
  log-dead-letters = on
  jvm-exit-on-fatal-error = on
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }
  remote {
    log-remote-lifecycle-events = on
    netty.tcp {
      # hostname = 172.24.0.162
      # port = 9998
      hostname = akka.hostname.com   # external (logical) hostname
      port = 8000                    # external (logical) port
      bind-hostname = 172.24.0.162   # internal (bind) hostname
      bind-port = 9998               # internal (bind) port
    }
  }
}
I tried to use the above configuration, but it does not work as expected. In addition, with the following nginx stream configuration the port is not forwarded either:
stream {
  upstream akka_proxy {
    hash $remote_addr consistent;
    server 127.0.0.1:9998;
  }
  server {
    listen 9990;
    proxy_connect_timeout 5s;
    proxy_timeout 20s;
    proxy_pass akka_proxy;
  }
}
[ERROR] [09/13/2022 16:59:37.916] [cliActorSystem-akka.remote.default-remote-dispatcher-5] [akka://cliActorSystem/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FClientActorSystem%40127.0.0.1%3A60761-1/endpointWriter] dropping message [class akka.actor.ActorSelectionMessage] for non-local recipient [Actor[akka.tcp://cliActorSystem@127.0.0.1:9990/]] arriving at [akka.tcp://cliActorSystem@127.0.0.1:9990] inbound addresses are [akka.tcp://cliActorSystem@127.0.0.1:9998]
I look forward to your help, thank you!
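One thing the error above does show is that messages arrive addressed to port 9990 while the node's inbound address is 9998; Akka drops remote messages unless the advertised hostname and port match exactly what the client dialed. A configuration sketch that lines the logical address up with the nginx listener (my assumption, not a verified fix; substitute your real domain) could look like:
// Hypothetical override: advertise the address that remote clients actually dial
// (the nginx listener), while binding locally to 9998, which nginx forwards to.
import com.typesafe.config.ConfigFactory

val overrides = ConfigFactory.parseString(
  """
    |akka.remote.netty.tcp {
    |  hostname = "akka.hostname.com"   # address remote clients put in the actor path
    |  port = 9990                      # the nginx listen port that clients dial
    |  bind-hostname = "172.24.0.162"   # local interface to bind
    |  bind-port = 9998                 # local port that nginx proxies to
    |}
  """.stripMargin)

val config = overrides.withFallback(ConfigFactory.load())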
I have a Scala project that uses akka-http. It is connected to a Couchbase database, and I want to be able to suppress some of the logs showing up in the terminal, such as:
16:52:00.970 [cb-computations-1] DEBUG com.couchbase.client.core.node.Node - [127.0.0.1/localhost]: Adding Service VIEW
My application.conf looks like this:
akka {
  log-config-on-start = off
  stdout-loglevel = INFO
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  default-dispatcher {
    fork-join-executor {
      parallelism-min = 8
    }
  }
  test {
    timefactor = 1
  }
}
http {
  host = "0.0.0.0"
  host = ${?HOST}
  port = 4000
  port = ${?PORT}
}
I've tried changing it but nothing works!
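Those Couchbase lines come from the Couchbase client through SLF4J, so the Akka settings above do not control them; they have to be silenced at the SLF4J backend. A minimal sketch, assuming Logback is the backend on the classpath (raising the logger level programmatically at startup; the usual alternative is the equivalent entry in logback.xml):
import ch.qos.logback.classic.{Level, Logger}
import org.slf4j.LoggerFactory

// Raise the Couchbase client loggers from DEBUG to INFO so the noisy lines disappear.
// Assumes Logback is the SLF4J implementation; the cast fails otherwise.
LoggerFactory
  .getLogger("com.couchbase.client")
  .asInstanceOf[Logger]
  .setLevel(Level.INFO)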
We are creating a simple Akka cluster sample, following the Akka in Action book. We are creating 3 seed nodes, as shown in the following configuration:
akka {
  loglevel = INFO
  stdout-loglevel = INFO
  event-handlers = ["akka.event.Logging$DefaultLogger"]
  log-dead-letters = 0
  log-dead-letters-during-shutdown = off
  actor {
    provider = cluster
  }
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = "127.0.0.1"
      hostname = ${?HOST}
      port = ${?PORT}
    }
  }
  cluster {
    seed-nodes = [
      "akka.tcp://words@127.0.0.1:2551",
      "akka.tcp://words@127.0.0.1:2552",
      "akka.tcp://words@127.0.0.1:2553"
    ]
    roles = ["seed"]
    role {
      seed.min-nr-of-members = 1
    }
  }
}
Actor system code:
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object Launcher extends App {
  val seedConfig = ConfigFactory.load("seed")
  val seedSystem = ActorSystem("words", seedConfig)
}
When I start the actor system from one terminal, the seed node loads, but according to the logs port 2552 is used, whereas I expected 2551. When I open a second terminal and try to run the actor system again, I am facing the following exception:
[ERROR] [03/10/2017 15:51:45.245] [words-akka.remote.default-remote-dispatcher-13] [NettyTransport(akka://words)] failed to bind to /127.0.0.1:2552, shutting down Netty transport
[error] (run-main-0) org.jboss.netty.channel.ChannelException: Failed to bind to: /127.0.0.1:2552
org.jboss.netty.channel.ChannelException: Failed to bind to: /127.0.0.1:2552
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport.$anonfun$listen$1(NettyTransport.scala:417)
at scala.util.Success.$anonfun$map$1(Try.scala:251)
at scala.util.Success.map(Try.scala:209)
This error is generated because the first terminal is already using port 2552, but we are not sure why the same port is used the second time. Our assumption is that the configuration is not being loaded. So, how can we fix this?
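Note that with port = ${?PORT}, the port only changes when a PORT environment variable (or system property) is actually supplied; otherwise the key stays unset in your file and Akka's reference default of 2552 applies to every process, which is why both terminals collide on 2552. A minimal sketch of one way to give each terminal its own port (my assumption: passing the port as a program argument rather than via the environment):
// Sketch: read the port from the first program argument so each terminal can start
// its node on a different port, e.g. `run 2551`, `run 2552`, `run 2553`.
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object Launcher extends App {
  val port = args.headOption.getOrElse("2551")
  val seedConfig = ConfigFactory
    .parseString(s"akka.remote.netty.tcp.port = $port")
    .withFallback(ConfigFactory.load("seed"))
  val seedSystem = ActorSystem("words", seedConfig)
}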
I would be grateful for some help with this Akka configuration question. This is probably something obvious, but I've been googling and looking at it all morning and have not yet come up with an answer.
My ActorSystem is created thus:
val config = ConfigFactory.load("akka.conf")
implicit val system = ActorSystem("fubar", config)
The classloader sees my configuration file since changes to logging levels take effect.
After creating the ActorSystem, my code does the following:
val mpr = system.actorOf(Props[ProcessingFlow])
In the ProcessingFlow the following:
val actorSink: Sink[PackedRecord, ActorRef] = Sink.actorSubscriber(Props[RCStreamReceiver])
In RCStreamReceiver the following:
val logWorkers = context.actorOf(FromConfig.props(Props[LogWorker]), "logWorker")
Yet, I get the following:
Configuration missing for router [akka://fubar/user/$a/StreamSupervisor-0/flow-3-2-actorSubscriberSink/logWorker] in 'akka.actor.deployment' section.
This in spite of my config file looking like this:
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "WARNING"
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
  logger-startup-timeout = 15s
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
    debug {
      log-config-on-start = off
      autoreceive = off
      lifecycle = off
      unhandled = on
      event-stream = off
    }
    deployment {
      default-dispatcher {
        fork-join-executor {
          parallelism-min = 8
        }
      }
      /"*"/logWorker = {
        router = balancing-pool
        nr-of-instances = 5
        executor = "thread-pool-executor"
        thread-pool-executor {
          core-pool-size-min = 5
          core-pool-size-max = 5
        }
      }
      /"*"/editWorker = {
        router = balancing-pool
        nr-of-instances = 5
        routees.paths = ["/foo/editWorker"]
      }
      /"*"/unknownWorker = {
        router = balancing-pool
        nr-of-instances = 5
        routees.paths = ["/foo/unknownWorker"]
      }
    }
  }
}
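The path in the error (/user/$a/StreamSupervisor-0/flow-3-2-actorSubscriberSink/logWorker) is nested several levels below /user, so a single-level /"*"/logWorker entry does not match it; that is my reading of why the router configuration is reported as missing. One fallback sketch (an assumption, not the only possible fix) is to define the pool in code instead of relying on the deployment section:
// Inside RCStreamReceiver: create the balancing pool programmatically instead of with
// FromConfig, so no akka.actor.deployment entry has to match the actor's runtime path.
import akka.actor.Props
import akka.routing.BalancingPool

val logWorkers = context.actorOf(BalancingPool(5).props(Props[LogWorker]), "logWorker")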
I was having trouble initializing an actor system on a remote IP address. I am using Akka actors and the Play framework. The code and the remote actor system are both on remote Rackspace servers. When I try to create a remote actor system on the other server, it fails to bind to that IP address. I don't think it is a network or firewall issue, because Rackspace says they have opened up all connections between the servers. This is the error message I am getting:
[error] application -
! Internal server error, for request [POST /payment/] ->
java.lang.ExceptionInInitializerError: null
at Routes$$anonfun$routes$1$$anonfun$apply$3$$anonfun$apply$4.apply(routes_routing.scala:44) ~[classes/:na]
at Routes$$anonfun$routes$1$$anonfun$apply$3$$anonfun$apply$4.apply(routes_routing.scala:44) ~[classes/:na]
at play.core.Router$HandlerInvoker$$anon$3.call(Router.scala:1080) ~[play_2.9.1-2.0.1.jar:2.0.1]
at play.core.Router$Routes$class.invokeHandler(Router.scala:1255) ~[play_2.9.1-2.0.1.jar:2.0.1]
at Routes$.invokeHandler(routes_routing.scala:14) ~[classes/:na]
at Routes$$anonfun$routes$1$$anonfun$apply$3.apply(routes_routing.scala:44) ~[classes/:na]
Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to: /172.17.100.232:2554
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:298) ~[netty-3.3.0.Final.jar:na]
at akka.remote.netty.NettyRemoteServer.start(Server.scala:53) ~[akka-remote-2.0.2.jar:2.0.2]
at akka.remote.netty.NettyRemoteTransport.start(NettyRemoteSupport.scala:73) ~[akka-remote-2.0.2.jar:2.0.2]
at akka.remote.RemoteActorRefProvider.init(RemoteActorRefProvider.scala:95) ~[akka-remote-2.0.2.jar:2.0.2]
at akka.actor.ActorSystemImpl._start(ActorSystem.scala:568) ~[akka-actor-2.0.2.jar:2.0.2]
at akka.actor.ActorSystemImpl.start(ActorSystem.scala:575) ~[akka-actor-2.0.2.jar:2.0.2]
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind(Native Method) ~[na:1.6.0_26]
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126) ~[na:1.6.0_26]
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59) ~[na:1.6.0_26]
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.bind(NioServerSocketPipelineSink.java:140) ~[netty-3.3.0.Final.jar:na]
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleServerSocket(NioServerSocketPipelineSink.java:92) ~[netty-3.3.0.Final.jar:na]
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:66) ~[netty-3.3.0.Final.jar:na]
I am creating the remote actor system here:
object Payment extends Controller {
  var goodies: AuthNetActorObject = null
  val system = ActorSystem("RemoteCreation", ConfigFactory.load.getConfig("remotecreation"))
  val myActor = system.actorOf(Props[authNetActor.AuthNetActorMain], name = "remoteActor")
  ...
}
Here is where I define remotecreation in my application.conf file:
remotecreation { # user-defined name for the configuration
  include "common"
  akka {
    actor {
      #serializer {
      #  proto = "akka.serialization.ProtobufSerializer"
      #  daemon-create = "akka.serialization.DaemonMsgCreateSerializer"
      #}
      #serialization-bindings {
      #  "com.google.protobuf.GeneratedMessage" = proto
      #  "akka.remote.DaemonMsgCreate" = daemon-create
      #}
      deployment {
        /remoteActor { # specifically has to be the name of the remote actor
          remote = "akka://RemoteCreation@172.17.100.232:2554"
          # router = "round-robin"
          # nr-of-instances = 10
          # target {
          #   nodes = ["akka://RemoteCreation@172.17.100.232:2554", "akka://RemoteCreation@172.17.100.224:2554"]
          # }
        }
      }
    }
  }
}
Here is my common.conf file that I include in the definition:
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    transport = "akka.remote.netty.NettyRemoteTransport"
    log-received-messages = on
    log-sent-messages = on
    log-remote-lifecycle-events = on
    netty {
      hostname = "172.17.100.232" # broadcast IP address of the remote system
      port = 2554
      log-received-messages = on
      log-sent-messages = on
      log-remote-lifecycle-events = on
    }
  }
}
It probably means that something else is using that port on that machine. Log into that machine and run netstat -anp | grep 2554 as root and check whether the port is in LISTEN status. If so, the process ID will be in the last column.
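Given the java.net.BindException: Cannot assign requested address above, it is also worth ruling out that 172.17.100.232 simply is not a local address on the machine doing the binding. A quick sanity check, independent of Akka (a throwaway sketch, not part of the application):
// Try to bind a plain server socket to the same address and port Akka uses.
// "Cannot assign requested address" here means the IP is not local to this machine;
// "Address already in use" means another process holds the port (see netstat above).
import java.net.{InetSocketAddress, ServerSocket}

val socket = new ServerSocket()
try {
  socket.bind(new InetSocketAddress("172.17.100.232", 2554))
  println("Bind succeeded; the address/port combination is usable.")
} finally {
  socket.close()
}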