Docker, Akka and Scala: app starts and stops immediately, without any apparent reason

I created a Scala app using Akka.
When I run it with scala /Statistics.jar ./server.conf, it works perfectly.
But if I deploy it to a server or put it in a Docker image, the app starts and then stops immediately after launch.
Here are my logs:
...
[DEBUG] [02/01/2017 10:26:13.198] [webserver-akka.actor.default-dispatcher-11] [EventStream(akka://version)] logger log1-Slf4jLogger started
[DEBUG] [02/01/2017 10:26:13.185] [subscription-akka.actor.default-dispatcher-4] [akka://subscription/system] now supervising Actor[akka://subscription/system/log1-Slf4jLogger#731298157]
[DEBUG] [02/01/2017 10:26:13.201] [subscription-akka.actor.default-dispatcher-4] [akka://subscription/system] now watched by Actor[akka://subscription/]
[DEBUG] [02/01/2017 10:26:13.202] [webserver-akka.actor.default-dispatcher-11] [EventStream(akka://version)] Default Loggers started
[DEBUG] [02/01/2017 10:26:13.193] [version-akka.actor.default-dispatcher-2] [akka://version/system] now supervising Actor[akka://version/system/log1-Slf4jLogger#1075717908]
[DEBUG] [02/01/2017 10:26:13.206] [session-akka.actor.default-dispatcher-3] [akka://session/system/UnhandledMessageForwarder] started (akka.event.LoggingBus$$anonfun$startDefaultLoggers$2$$anon$3#1b5d5357)
[DEBUG] [02/01/2017 10:26:13.205] [webserver-akka.actor.default-dispatcher-13] [EventStream(akka://session)] Default Loggers started
[DEBUG] [02/01/2017 10:26:13.205] [webserver-akka.actor.default-dispatcher-8] [EventStream(akka://subscription)] Default Loggers started
[DEBUG] [02/01/2017 10:26:13.206] [subscription-akka.actor.default-dispatcher-4] [akka://subscription/system/UnhandledMessageForwarder] started (akka.event.LoggingBus$$anonfun$startDefaultLoggers$2$$anon$3#51839748)
[DEBUG] [02/01/2017 10:26:13.211] [subscription-akka.actor.default-dispatcher-2] [akka://subscription/system] now supervising Actor[akka://subscription/system/UnhandledMessageForwarder#2056207919]
serv http launch
[DEBUG] [02/01/2017 10:26:15.906] [webserver-akka.actor.default-dispatcher-8] [EventStream] shutting down: StandardOutLogger started
[DEBUG] [02/01/2017 10:26:15.909] [webserver-akka.actor.default-dispatcher-8] [EventStream] all default loggers stopped
[DEBUG] [02/01/2017 10:26:15.909] [webserver-akka.actor.default-dispatcher-3] [akka://webserver/system/log1-Slf4jLogger] stopped
[DEBUG] [02/01/2017 10:26:15.910] [webserver-akka.actor.default-dispatcher-7] [akka://webserver/system/IO-TCP] stopping
[DEBUG] [02/01/2017 10:26:15.910] [webserver-akka.actor.default-dispatcher-12] [akka://webserver/system/UnhandledMessageForwarder] stopped
[DEBUG] [02/01/2017 10:26:15.910] [webserver-akka.actor.default-dispatcher-16] [akka://webserver/system/IO-TCP/selectors] stopping
[DEBUG] [02/01/2017 10:26:15.910] [webserver-akka.actor.default-dispatcher-8] [akka://webserver/system] stopping
[DEBUG] [02/01/2017 10:26:15.910] [webserver-akka.actor.default-dispatcher-3] [akka://webserver/system/deadLetterListener] stopped
[DEBUG] [02/01/2017 10:26:15.910] [webserver-akka.actor.default-dispatcher-18] [akka://webserver/system/IO-TCP/selectors/$a] no longer watched by Actor[akka://webserver/system/IO-TCP/selectors#1324110335]
[DEBUG] [02/01/2017 10:26:15.910] [webserver-akka.actor.default-dispatcher-13] [akka://webserver/system/eventStreamUnsubscriber-1] stopped
[DEBUG] [02/01/2017 10:26:15.911] [webserver-akka.actor.default-dispatcher-18] [akka://webserver/system/IO-TCP/selectors/$a] stopped
[DEBUG] [02/01/2017 10:26:15.911] [webserver-akka.actor.default-dispatcher-13] [akka://webserver/system/IO-TCP/selectors] stopped
[DEBUG] [02/01/2017 10:26:15.912] [webserver-akka.actor.default-dispatcher-8] [akka://webserver/system/IO-TCP] stopped
[DEBUG] [02/01/2017 10:26:15.912] [webserver-akka.actor.default-dispatcher-13] [akka://webserver/system] stopped
[DEBUG] [02/01/2017 10:26:15.912] [webserver-akka.actor.default-dispatcher-18] [akka://webserver/] received AutoReceiveMessage Envelope(Terminated(Actor[akka://webserver/system]),Actor[akka://webserver/system])
[DEBUG] [02/01/2017 10:26:15.913] [webserver-akka.actor.default-dispatcher-18] [akka://webserver/] stopped
It seems I can't launch my app in the background.
How can I launch it in the background?
Here is my Main.scala:
import java.io.File
import java.net.InetAddress

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.stream.ActorMaterializer
import com.typesafe.config.ConfigFactory

import scala.io.StdIn

object WebServer extends App {
  implicit val system = ActorSystem("webserver")
  implicit val materializer = ActorMaterializer()
  // needed for the future flatMap/onComplete in the end
  implicit val executionContext = system.dispatcher

  val ipServer = InetAddress.getLocalHost().getHostAddress()
  val configApplication = ConfigFactory.load("application")
  val serverFile = new File(args(0))
  val configServer = ConfigFactory.parseFile(serverFile)

  val routes =
    new AppActiveRoute(system.actorOf(AppActiveHandler.props(), "AppActiveHandler")).route

  val bindingFuture = Http().bindAndHandle(
    routes,
    ipServer,
    configApplication.getInt("http.port")
  )

  println("serv http launch")

  StdIn.readLine()
  bindingFuture
    .flatMap(_.unbind()) // trigger unbinding from the port
    .onComplete(_ => {
      cluster.close()
      system.terminate()
    })

  bindingFuture.onFailure {
    case ex: Exception =>
      println(ex, "Failed to bind to {}:{}!", ipServer, configApplication.getInt("http.port"))
  }
}

If your application is supposed to be dockerized and run in a container, you probably don't need this part:
  StdIn.readLine()
  bindingFuture
    .flatMap(_.unbind()) // trigger unbinding from the port
    .onComplete(_ => {
      cluster.close()
      system.terminate()
    })
Try deleting it.
If you want to gracefully shut down your actor system when the JVM exits, you can use a shutdown hook, as described in this answer.
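For illustration, a minimal sketch of that idea, reusing the names from the question (routes, cluster, ipServer and configApplication are assumed to exist in scope): drop the blocking readLine and move the cleanup into a JVM shutdown hook.

val bindingFuture = Http().bindAndHandle(
  routes,
  ipServer,
  configApplication.getInt("http.port")
)
println("serv http launch")

// Optional: clean up when the JVM (or the container) is stopped, instead of on readLine.
sys.addShutdownHook {
  bindingFuture
    .flatMap(_.unbind())      // stop accepting new connections
    .onComplete { _ =>
      cluster.close()         // `cluster` as in the question's code
      system.terminate()
    }
}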

Related

Lagom create static µ-service cluster

I wrote a few µ-services with Lagom.
I have a docker-compose file in which all µ-services are created manually.
Now I have a problem.
If I use discovery with akka-dns, nothing is found.
If I use static service lookup and give them the exposed ports, they are not found either. Where is my mistake?
Docker compose file:
security:
  container_name: panakeia-security
  image: nexus.familieschmidt.online/andre-user/panakeia/security-impl:latest
  environment:
    - APPLICATION_SECRET=hlPlU12MK?[oF1Xj`>xd>CtCjTHohfu0=ekVFOo?r]lH^GpFo5o?kurLFO6sQPzD
    - POSTGRESQL_URL=jdbc:postgresql://****SERVER-IP****:12000/panakeia
    - POSTGRESQL_USERNAME=panakeia
    - POSTGRESQL_PASSWORD=123456789
    - INIT_USERNAME=test#test.de
    - INIT_USERPASS=123456
    - REQUIRED_CONTACT_POINT_NR=1
    - JAVA_OPTS=-Dconfig.resource=prod-application.conf -Dplay.server.pidfile.path=/dev/null
  expose:
    - 9000
    - 15000
#    - 8558
  ports:
    - "15000:15000"
    - "14999:9000"
  networks:
    - panakeia-network
My loader class:
class SecurityLoader extends LagomApplicationLoader {

  override def load(context: LagomApplicationContext): LagomApplication =
    new SecurityApplication(context) with ConfigurationServiceLocatorComponents {
      // override def staticServiceUri: URI = URI.create("http://localhost:9000")
    } // AkkaDiscoveryComponents

  override def loadDevMode(context: LagomApplicationContext): LagomApplication =
    new SecurityApplication(context) with LagomDevModeComponents

  override def describeService = Some(readDescriptor[SecurityService])
}
and my production.conf:
include "application"

play {
  server {
    pidfile.path = "/dev/null"
  }
  http.secret.key = "${APPLICATION_SECRET}"
}

db.default {
  url = ${POSTGRESQL_URL}
  username = ${POSTGRESQL_USERNAME}
  password = ${POSTGRESQL_PASSWORD}
}

user.init {
  username = ${INIT_USERNAME}
  password = ${INIT_USERPASS}
}

pac4j.jwk = {"kty":"oct","k":${JWK_KEY},"alg":"HS512"}

lagom.persistence.jdbc.create-tables.auto = true

//akka {
//  discovery.method = akka-dns
//
//  cluster {
//    shutdown-after-unsuccessful-join-seed-nodes = 60s
//  }
//
//  management {
//    cluster.bootstrap {
//      contact-point-discovery {
//        discovery-method = akka.discovery
////        discovery-method = akka-dns
//        service-name = "security-service"
//        required-contact-point-nr = ${REQUIRED_CONTACT_POINT_NR}
//      }
//    }
//  }
//}

lagom.services {
  security-impl = "http://****SERVER-IP****:15000"
  // serviceB = "http://10.1.2.4:8080"
}
For other domains on the same server I have an nginx reverse proxy, but not for this project.
When I open http://SERVER-IP:14999 in the browser, I get the typical Play 404 Not Found screen. :(
I have no problem writing the compose file and linking the services manually, and no problem using akka-dns. But I won't be running Kubernetes or anything like that.
Thanks for your help.
Here is a log of one of the µ-services:
2021-04-02T14:38:24.151Z [info] akka.event.slf4j.Slf4jLogger [] - Slf4jLogger started
2021-04-02T14:38:24.409Z [info] akka.remote.artery.tcp.ArteryTcpTransport [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=ArteryTcpTransport(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.409UTC] - Remoting started with transport [Artery tcp]; listening on address [akka://application#192.168.0.5:25520] with UID [-580933006609059378]
2021-04-02T14:38:24.427Z [info] akka.cluster.Cluster [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.427UTC] - Cluster Node [akka://application#192.168.0.5:25520] - Starting up, Akka version [2.6.8] ...
2021-04-02T14:38:24.527Z [info] akka.cluster.Cluster [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.527UTC] - Cluster Node [akka://application#192.168.0.5:25520] - Registered cluster JMX MBean [akka:type=Cluster]
2021-04-02T14:38:24.528Z [info] akka.cluster.Cluster [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.527UTC] - Cluster Node [akka://application#192.168.0.5:25520] - Started up successfully
2021-04-02T14:38:24.552Z [info] akka.cluster.Cluster [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-2, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.552UTC] - Cluster Node [akka://application#192.168.0.5:25520] - No downing-provider-class configured, manual cluster downing required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#downing
2021-04-02T14:38:24.552Z [info] akka.cluster.Cluster [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-2, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.552UTC] - Cluster Node [akka://application#192.168.0.5:25520] - No seed-nodes configured, manual cluster join required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#joining
2021-04-02T14:38:25.383Z [info] akka.io.DnsExt [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=DnsExt(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.383UTC] - Creating async dns resolver async-dns with manager name SD-DNS
2021-04-02T14:38:25.386Z [info] akka.management.cluster.bootstrap.ClusterBootstrap [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=ClusterBootstrap(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.386UTC] - Bootstrap using default `akka.discovery` method: AggregateServiceDiscovery
2021-04-02T14:38:25.394Z [info] akka.management.internal.HealthChecksImpl [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=HealthChecksImpl(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.394UTC] - Loading readiness checks List(NamedHealthCheck(cluster-membership,akka.management.cluster.scaladsl.ClusterMembershipCheck))
2021-04-02T14:38:25.394Z [info] akka.management.internal.HealthChecksImpl [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=HealthChecksImpl(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.394UTC] - Loading liveness checks List()
2021-04-02T14:38:25.488Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.488UTC] - Binding Akka Management (HTTP) endpoint to: 192.168.0.5:8558
2021-04-02T14:38:25.541Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.541UTC] - Including HTTP management routes for ClusterHttpManagementRouteProvider
2021-04-02T14:38:25.581Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.581UTC] - Including HTTP management routes for ClusterBootstrap
2021-04-02T14:38:25.585Z [info] akka.management.cluster.bootstrap.ClusterBootstrap [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=ClusterBootstrap(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.584UTC] - Using self contact point address: http://192.168.0.5:8558
2021-04-02T14:38:25.600Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.600UTC] - Including HTTP management routes for HealthCheckRoutes
2021-04-02T14:38:26.108Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-18, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:26.108UTC] - Bound Akka Management (HTTP) endpoint to: 192.168.0.5:8558
2021-04-02T14:38:26.154Z [info] akka.management.cluster.bootstrap.ClusterBootstrap [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=ClusterBootstrap(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:26.154UTC] - Initiating bootstrap procedure using akka.discovery method...
2021-04-02T14:38:26.163Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:26.163UTC] - Locating service members. Using discovery [akka.discovery.aggregate.AggregateServiceDiscovery], join decider [akka.management.cluster.bootstrap.LowestAddressJoinDecider]
2021-04-02T14:38:26.164Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:26.163UTC] - Looking up [Lookup(application,None,Some(tcp))]
2021-04-02T14:38:26.189Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:26.189UTC] - Located service members based on: [Lookup(application,None,Some(tcp))]: [], filtered to []
2021-04-02T14:38:26.195Z [info] play.api.db.DefaultDBApi [] - Database [default] initialized
2021-04-02T14:38:26.200Z [info] play.api.db.HikariCPConnectionPool [] - Creating Pool for datasource 'default'
2021-04-02T14:38:26.207Z [info] com.zaxxer.hikari.HikariDataSource [] - HikariPool-1 - Starting...
2021-04-02T14:38:26.218Z [info] com.zaxxer.hikari.HikariDataSource [] - HikariPool-1 - Start completed.
2021-04-02T14:38:26.221Z [info] play.api.db.HikariCPConnectionPool [] - datasource [default] bound to JNDI as DefaultDS
2021-04-02T14:38:26.469Z [info] akka.cluster.sharding.ShardRegion [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-4, akkaSource=akka://application#192.168.0.5:25520/system/sharding/ProfileProcessor, sourceActorSystem=application, akkaTimestamp=14:38:26.469UTC] - ProfileProcessor: Idle entities will be passivated after [2.000 min]
2021-04-02T14:38:26.480Z [info] akka.cluster.sharding.typed.scaladsl.ClusterSharding [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=ClusterSharding(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:26.480UTC] - Starting Shard Region [ProfileEntity]...
2021-04-02T14:38:26.483Z [info] akka.cluster.sharding.ShardRegion [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-3, akkaSource=akka://application#192.168.0.5:25520/system/sharding/ProfileEntity, sourceActorSystem=application, akkaTimestamp=14:38:26.483UTC] - ProfileEntity: Idle entities will be passivated after [2.000 min]
2021-04-02T14:38:26.534Z [info] play.api.Play [] - Application started (Prod) (no global state)
2021-04-02T14:38:26.553Z [info] play.core.server.AkkaHttpServer [] - Listening for HTTP on /0.0.0.0:9000
2021-04-02T14:38:27.188Z [info] akka.management.cluster.bootstrap.LowestAddressJoinDecider [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=LowestAddressJoinDecider(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:27.187UTC] - Discovered [0] contact points, confirmed [0], which is less than the required [2], retrying
2021-04-02T14:38:27.363Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:27.363UTC] - Looking up [Lookup(application,None,Some(tcp))]
2021-04-02T14:38:27.369Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-19, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:27.369UTC] - Located service members based on: [Lookup(application,None,Some(tcp))]: [], filtered to []
2021-04-02T14:38:28.183Z [info] akka.management.cluster.bootstrap.LowestAddressJoinDecider [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-19, akkaSource=LowestAddressJoinDecider(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:28.183UTC] - Discovered [0] contact points, confirmed [0], which is less than the required [2], retrying
2021-04-02T14:38:28.563Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:28.563UTC] - Looking up [Lookup(application,None,Some(tcp))]
2021-04-02T14:38:28.564Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-18, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:28.564UTC] - Located service members based on: [Lookup(application,None,Some(tcp))]: [], filtered to []
2021-04-02T14:38:29.183Z [info] akka.management.cluster.bootstrap.LowestAddressJoinDecider [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=LowestAddressJoinDecider(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:29.183UTC] - Discovered [0] contact points, confirmed [0], which is less than the required [2], retrying
2021-04-02T14:38:29.724Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-22, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:29.723UTC] - Looking up [Lookup(application,None,Some(tcp))]
2021-04-02T14:38:29.729Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-18, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:29.729UTC] - Located service members based on: [Lookup(application,None,Some(tcp))]: [], filtered to []
2021-04-02T14:38:30.183Z [info] akka.management.cluster.bootstrap.LowestAddressJoinDecider [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-18, akkaSource=LowestAddressJoinDecider(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:30.183UTC] - Discovered [0] contact points, confirmed [0], which is less than the required [2], retrying
It looks like the startup process is going completely wrong.
Inside the SecurityApplication class a user should be created with this code:
val newUUID = UUID.randomUUID()
clusterSharding.entityRefFor(Profile.typedKey, newUUID.toString)
  .ask[ProfileConfirmation](reply =>
    CreateProfile(username, password, Some(EmailAddress(username)), Set(SuperAdminRole), PersonalData("", "", ""), reply)
  )(30.seconds)
  .map {
    case ProfileCmdAccepted(profile) => s"Profile ${profile.login} created"
    case ProfileCmdRejected(err) => s"ERROR while default user init: $err"
    case a => s"a: $a"
  }
But it throws a timeout exception. I think the cluster is not initialized. In the demo everything works.
Does someone have an idea?
Okay, found it.
In the production conf I added:
lagom.services {
  security-service = "http://"${REMOTE_IP}":15000"
  binary-service = "http://"${REMOTE_IP}":15001"
}
lagom.cluster.bootstrap.enabled = false
In the Lagom application loader's load method I mixed my application in with ConfigurationServiceLocatorComponents, as sketched below.
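For reference, a minimal sketch of what the production loader looks like after that change, reusing the SecurityApplication and SecurityService names from the question:

class SecurityLoader extends LagomApplicationLoader {

  // Production: services are resolved from the static lagom.services config block
  override def load(context: LagomApplicationContext): LagomApplication =
    new SecurityApplication(context) with ConfigurationServiceLocatorComponents

  // Dev mode keeps using the Lagom development service locator
  override def loadDevMode(context: LagomApplicationContext): LagomApplication =
    new SecurityApplication(context) with LagomDevModeComponents

  override def describeService = Some(readDescriptor[SecurityService])
}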
And now, really important: Docker must be configured! I used docker-compose.
First, add a network:
networks:
  panakeia-network:
    # driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16
And now I give cluster.seed-nodes.0 the IP of the service, and I name the actor system. That is also important!
security:
  container_name: panakeia-security
  image: security-impl:latest
  environment:
    - APPLICATION_SECRET=****
    - POSTGRESQL_URL=jdbc:postgresql://postgres-database:5432/panakeia
    - POSTGRESQL_USERNAME=????
    - POSTGRESQL_PASSWORD=******
    - INIT_USERNAME=test#test.de
    - INIT_USERPASS=*******
    - JWK_KEY=******
    - REQUIRED_CONTACT_POINT_NR=1
    - KAFKA_BROKER=REALLY_IP/OR_KAFKA_SERVICE_NAME:9092
    - REMOTE_IP=????
    - JAVA_OPTS=-Dconfig.resource=prod-application.conf -Dplay.server.pidfile.path=/dev/null -Dakka.cluster.seed-nodes.0=akka://panakeia@172.28.1.5:25520 -Dplay.akka.actor-system=panakeia
  expose:
    - 9000
    - 15000
  ports:
    - "15000:15000"
    - "14999:9000"
  networks:
    panakeia-network:
      ipv4_address: 172.28.1.5
This way it seems possible to create a simple and small µ-service cluster without complex systems like Kubernetes.

akka-http no stack trace or details on error

I got a structure which can basically be summarized as:
outside user makes a rest request to akka-http server
akka-http makes a request(query?) to a (some)data source using asynchttpclient
akka-http transforms the result from asynchttpclient and serves it back to user
At some point I get an error from akka which tells me almost nothing. This error happens right after asynchttpclient returns some results. (In fact, at this point I can print the results to the log; they are there, parsed from JSON etc., but akka has already errored out.)
Even at debug logging level I get no decipherable error message or stack trace from akka.
The only message I get is:
2017-03-24 17:22:55 INFO CompanyRepository:111 - search company with name:"somecompanyname"
2017-03-24 17:22:55 INFO CompanyRepository:73 - [QUERY TIME]: 527ms
[ERROR] [03/24/2017 17:22:55.951] [company-api-system-akka.actor.default-dispatcher-3] [akka.actor.ActorSystemImpl(company-api-system)] Error during processing of request: 'requirement failed'. Completing with 500 Internal Server Error response.
This error message is the only thing I get. Relevant parts of my config:
akka {
  loglevel = "DEBUG"
  # edit -- tested with sl4jlogger with no change
  #loggers = ["akka.event.slf4j.Slf4jLogger"]
  #logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"

  parsing {
    max-content-length = 800m
    max-chunk-size = 100m
  }

  server {
    server-header = akka-http/${akka.http.version}
    idle-timeout = 120 s
    request-timeout = 120 s
    bind-timeout = 10s
    max-connections = 1024
    pipelining-limit = 32
    verbose-error-messages = on
  }

  client {
    user-agent-header = akka-http/${akka.http.version}
  }

  host-connection-pool {
    max-connections = 4
  }
}

akka.http.routing {
  verbose-error-messages = on
}
Does anyone know if I can make akka spit out more details about what/where the error is occurring?
Edit: I realized I do NOT get this same error on resultsets which are smaller in size. <- ignore
Edit 2:
Added akka.loglevel = DEBUG; it spits out a lot more noise, but still no detail about the actual error.
Quickly converted from asynchttpclient to akka to rule out AHC.
I already had a wrapper around my query to time it; added some logging there, trying to pinpoint when exactly the error happens.
def queryTimer[ R <: Future[ Any ] ]( block: => R ): R = {
  val t0 = System.currentTimeMillis()
  val result = block
  result.onComplete { maybeResult =>
    val t1 = System.currentTimeMillis()
    logger.info( "[QUERY TIME]: " + ( t1 - t0 ) + "ms" )
    maybeResult match {
      case Success(some) =>
        logger.info( "successful feature:")
        logger.info( FormattedString.prettyPrint(some))
      case Failure(someFailure) =>
        logger.info( "failed feature:")
        logger.debug( FormattedString.prettyPrint(someFailure))
    }
  }
  result
}
resulting log:
2017-03-28 13:19:10 INFO CompanyRepository:111 - search company with name:"some company"
[DEBUG] [03/28/2017 13:19:10.497] [company-api-system-akka.actor.default-dispatcher-2] [EventStream(akka://xca-api-actor-system)] logger log1-Logging$DefaultLogger started
[DEBUG] [03/28/2017 13:19:10.497] [company-api-system-akka.actor.default-dispatcher-2] [EventStream(akka://xca-api-actor-system)] Default Loggers started
[DEBUG] [03/28/2017 13:19:10.613] [company-api-system-akka.actor.default-dispatcher-2] [AkkaSSLConfig(akka://xca-api-actor-system)] Initializing AkkaSSLConfig extension...
[DEBUG] [03/28/2017 13:19:10.613] [company-api-system-akka.actor.default-dispatcher-2] [AkkaSSLConfig(akka://xca-api-actor-system)] buildHostnameVerifier: created hostname verifier: com.typesafe.sslconfig.ssl.DefaultHostnameVerifier#779e2339
[DEBUG] [03/28/2017 13:19:10.633] [xca-api-actor-system-akka.actor.default-dispatcher-3] [akka://xca-api-actor-system/user/pool-master/PoolInterfaceActor-0] (Re-)starting host connection pool to localhost:27474
[DEBUG] [03/28/2017 13:19:10.727] [xca-api-actor-system-akka.actor.default-dispatcher-3] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Resolving localhost before connecting
[DEBUG] [03/28/2017 13:19:10.740] [xca-api-actor-system-akka.actor.default-dispatcher-4] [akka://xca-api-actor-system/system/IO-DNS] Resolution request for localhost from Actor[akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0#-815754478]
[DEBUG] [03/28/2017 13:19:10.749] [xca-api-actor-system-akka.actor.default-dispatcher-4] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Attempting connection to [localhost/127.0.0.1:27474]
[DEBUG] [03/28/2017 13:19:10.751] [xca-api-actor-system-akka.actor.default-dispatcher-4] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Connection established to [localhost:27474]
2017-03-28 13:19:10 INFO CompanyRepository:73 - [QUERY TIME]: 376ms
2017-03-28 13:19:10 INFO CompanyRepository:77 - successful feature:
[ERROR] [03/28/2017 13:19:10.896] [company-api-system-akka.actor.default-dispatcher-7] [akka.actor.ActorSystemImpl(company-api-system)] Error during processing of request: 'requirement failed'. Completing with 500 Internal Server Error response.
2017-03-28 13:19:10 INFO CompanyRepository:78 - SearchResult(List(
( prettyprint output here!!! lots and lots of legit result, json parsed succcesfully into a bunch of case classes)
As you can see, my logging format and akka's are different; the ERROR is coming from akka with no details, while everything else looks like it is working.
Edit 3: logs with sleeps in between calls.
New query timer function with sleeps:
def queryTimer[ R <: Future[ Any ] ]( block: => R ): R = {
  val t0 = System.currentTimeMillis()
  val result = block
  result.onComplete { maybeResult =>
    val t1 = System.currentTimeMillis()
    logger.info( "[QUERY TIME]: " + ( t1 - t0 ) + "ms" )
    maybeResult match {
      case Success(some) =>
        Thread.sleep(500)
        logger.info( "successful feature:")
        Thread.sleep(500)
        logger.info( FormattedString.prettyPrint(some))
        Thread.sleep(500)
        logger.info("we are there!")
      case Failure(someFailure) =>
        logger.info( "failed feature:")
        logger.debug( FormattedString.prettyPrint(someFailure))
    }
  }
  result
}
logs with sleeps
[DEBUG] [03/30/2017 11:11:58.629] [xca-api-actor-system-akka.actor.default-dispatcher-7] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Attempting connection to [localhost/127.0.0.1:27474]
[DEBUG] [03/30/2017 11:11:58.631] [xca-api-actor-system-akka.actor.default-dispatcher-7] [akka://xca-api-actor-system/system/IO-TCP/selectors/$a/0] Connection established to [localhost:27474]
11:11:59.442 [pool-2-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:11:59.496 [pool-1-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:00.250 [ForkJoinPool-2-worker-15] INFO c.s.s.r.neo4j.CompanyRepository - [QUERY TIME]: 1880ms
[ERROR] [03/30/2017 11:12:00.265] [company-api-system-akka.actor.default-dispatcher-3] [akka.actor.ActorSystemImpl(company-api-system)] Error during processing of request: 'requirement failed'. Completing with 500 Internal Server Error response.
11:12:00.543 [pool-2-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:00.597 [pool-1-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:00.752 [ForkJoinPool-2-worker-15] INFO c.s.s.r.neo4j.CompanyRepository - successful feature:
11:12:01.645 [pool-2-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:01.697 [pool-1-thread-1] DEBUG o.a.netty.channel.DefaultChannelPool - Closed 0 connections out of 0 in 0 ms
11:12:01.750 [ForkJoinPool-2-worker-15] INFO c.s.s.r.neo4j.CompanyRepository - SearchResult(List( "lots of legit result here"
11:12:02.281 [ForkJoinPool-2-worker-15] INFO c.s.s.r.neo4j.CompanyRepository - we are there!
Edit 4 and solution!
Apparently the default exception handler does not print a stack trace! Overriding the exception handler with a very basic catch-all:
implicit def myExceptionHandler: ExceptionHandler =
  ExceptionHandler {
    case e: Exception => {
      logger.info("---------------- exception log start")
      logger.error(e.getMessage, e)
      logger.error("cause", e.getCause)
      logger.error("cause", e.getStackTraceString)
      logger.info(FormattedString.prettyPrint(e))
      logger.info("---------------- exception log end")
      Directives.complete("server made a boo boo")
    }
  }
results in a stack trace that befuddles the sh*t out of me!!
11:42:04.634 [company-api-system-akka.actor.default-dispatcher-2] INFO c.stepweb.scarifgate.CompanyApiApp$ - ---------------- exception log start
11:42:04.640 [company-api-system-akka.actor.default-dispatcher-2] ERROR c.stepweb.scarifgate.CompanyApiApp$ - requirement failed
java.lang.IllegalArgumentException: requirement failed
at scala.Predef$.require(Predef.scala:212) ~[scala-library-2.11.8.jar:na]
at spray.json.BasicFormats$StringJsonFormat$.write(BasicFormats.scala:121) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.BasicFormats$StringJsonFormat$.write(BasicFormats.scala:119) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.ProductFormats$class.productElement2Field(ProductFormats.scala:46) ~[spray-json_2.11-1.3.2.jar:na]
at com.stepweb.scarifgate.services.CompanyService.productElement2Field(CompanyService.scala:14) ~[classes/:na]
at spray.json.ProductFormatsInstances$$anon$3.write(ProductFormatsInstances.scala:73) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.ProductFormatsInstances$$anon$3.write(ProductFormatsInstances.scala:68) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.PimpedAny.toJson(package.scala:39) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.CollectionFormats$$anon$1$$anonfun$write$1.apply(CollectionFormats.scala:26) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.CollectionFormats$$anon$1$$anonfun$write$1.apply(CollectionFormats.scala:26) ~[spray-json_2.11-1.3.2.jar:na]
at scala.collection.immutable.List.map(List.scala:273) ~[scala-library-2.11.8.jar:na]
at spray.json.CollectionFormats$$anon$1.write(CollectionFormats.scala:26) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.CollectionFormats$$anon$1.write(CollectionFormats.scala:25) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.ProductFormats$class.productElement2Field(ProductFormats.scala:46) ~[spray-json_2.11-1.3.2.jar:na]
at com.stepweb.scarifgate.services.CompanyService.productElement2Field(CompanyService.scala:14) ~[classes/:na]
at spray.json.ProductFormatsInstances$$anon$1.write(ProductFormatsInstances.scala:30) ~[spray-json_2.11-1.3.2.jar:na]
at spray.json.ProductFormatsInstances$$anon$1.write(ProductFormatsInstances.scala:26) ~[spray-json_2.11-1.3.2.jar:na]
at akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport$$anonfun$sprayJsonMarshaller$1.apply(SprayJsonSupport.scala:62) ~[akka-http-spray-json_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport$$anonfun$sprayJsonMarshaller$1.apply(SprayJsonSupport.scala:62) ~[akka-http-spray-json_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anonfun$compose$1$$anonfun$apply$15.apply(Marshaller.scala:73) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anonfun$compose$1$$anonfun$apply$15.apply(Marshaller.scala:73) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.GenericMarshallers$$anonfun$optionMarshaller$1$$anonfun$apply$1.apply(GenericMarshallers.scala:19) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.GenericMarshallers$$anonfun$optionMarshaller$1$$anonfun$apply$1.apply(GenericMarshallers.scala:18) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.PredefinedToResponseMarshallers$$anonfun$fromStatusCodeAndHeadersAndValue$1$$anonfun$apply$5.apply(PredefinedToResponseMarshallers.scala:58) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.PredefinedToResponseMarshallers$$anonfun$fromStatusCodeAndHeadersAndValue$1$$anonfun$apply$5.apply(PredefinedToResponseMarshallers.scala:57) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anonfun$compose$1$$anonfun$apply$15.apply(Marshaller.scala:73) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anonfun$compose$1$$anonfun$apply$15.apply(Marshaller.scala:73) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.ToResponseMarshallable$$anonfun$1$$anonfun$apply$1.apply(ToResponseMarshallable.scala:29) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.ToResponseMarshallable$$anonfun$1$$anonfun$apply$1.apply(ToResponseMarshallable.scala:29) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.Marshaller$$anon$1.apply(Marshaller.scala:92) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.GenericMarshallers$$anonfun$futureMarshaller$1$$anonfun$apply$3$$anonfun$apply$4.apply(GenericMarshallers.scala:33) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.marshalling.GenericMarshallers$$anonfun$futureMarshaller$1$$anonfun$apply$3$$anonfun$apply$4.apply(GenericMarshallers.scala:33) ~[akka-http_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.util.FastFuture$.akka$http$scaladsl$util$FastFuture$$strictTransform$1(FastFuture.scala:41) ~[akka-http-core_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.util.FastFuture$$anonfun$transformWith$extension1$1.apply(FastFuture.scala:51) [akka-http-core_2.11-10.0.0.jar:10.0.0]
at akka.http.scaladsl.util.FastFuture$$anonfun$transformWith$extension1$1.apply(FastFuture.scala:50) [akka-http-core_2.11-10.0.0.jar:10.0.0]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) [scala-library-2.11.8.jar:na]
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91) [akka-actor_2.11-2.4.16.jar:na]
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72) [scala-library-2.11.8.jar:na]
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:39) [akka-actor_2.11-2.4.16.jar:na]
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:415) [akka-actor_2.11-2.4.16.jar:na]
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [scala-library-2.11.8.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) [scala-library-2.11.8.jar:na]
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) [scala-library-2.11.8.jar:na]
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) [scala-library-2.11.8.jar:na]
11:42:04.640 [company-api-system-akka.actor.default-dispatcher-2] ERROR c.stepweb.scarifgate.CompanyApiApp$ - cause
11:42:04.641 [company-api-system-akka.actor.default-dispatcher-2] ERROR c.stepweb.scarifgate.CompanyApiApp$ - cause
11:42:04.644 [company-api-system-akka.actor.default-dispatcher-2] INFO c.stepweb.scarifgate.CompanyApiApp$ - java.lang.IllegalArgumentException: requirement failed
11:42:04.644 [company-api-system-akka.actor.default-dispatcher-2] INFO c.stepweb.scarifgate.CompanyApiApp$ - ---------------- exception log end
So... the exception is caused here, in spray.json.BasicFormats:
implicit object StringJsonFormat extends JsonFormat[String] {
  def write(x: String) = {
    require(x ne null) // <-----------------------------------
    JsString(x)
  }
  def read(value: JsValue) = value match {
    case JsString(x) => x
    case x => deserializationError("Expected String as JsString, but got " + x)
  }
}
Which means one of the strings somewhere in these thousands of lines of response is null. Special thanks goes to the laziness of using that require without a message. Debugging which string is null and where will be a nightmare, but I still think akka should fail in a better way.
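As a side note, one way to avoid tripping that require, assuming the offending field really can be missing in the source data, is to model it as an Option[String]; spray-json then simply omits a None field instead of calling StringJsonFormat.write(null). A sketch with made-up names:

import spray.json._
import DefaultJsonProtocol._

object NullSafeJsonSketch {
  // Hypothetical model: `website` may legitimately be missing, so it is an Option
  // instead of a possibly-null String.
  case class Company(name: String, website: Option[String])

  implicit val companyFormat: RootJsonFormat[Company] = jsonFormat2(Company)

  // A None field is simply omitted from the output instead of hitting require(x ne null).
  val json: JsValue = Company("ACME", None).toJson
}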
Well, the default akka-http ExceptionHandler doesn't print a stack trace; it prints only the error message, or its class name if the message is empty. But you can provide a custom exception handler that will print anything you want (i.e. the stack trace in your example).
Some examples of how to write a custom exception handler are provided in ExceptionHandlerExamplesSpec on GitHub.
The simplest way in your case seems to be to define your own custom implicit exception handler:
import akka.http.scaladsl.model._
import akka.http.scaladsl.server._
import StatusCodes._
import Directives._
import scala.util.control.NonFatal

implicit def myExceptionHandler: ExceptionHandler =
  ExceptionHandler {
    case NonFatal(e) =>
      logger.error(s"Exception $e at\n${e.getStackTraceString}")
      complete(HttpResponse(InternalServerError, entity = "Internal Server Error"))
  }
Try setting the loggers as well - from your configuration it seems they're not set. Something like:
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "DEBUG"
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
}
Also, consider using akka-slf4j along with its recommended logging backend, logback.
This should make akka spit out more details.
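For completeness, a rough sketch (the route, port and failing call are made up) of how such an implicit handler gets picked up: because it is implicit in scope, the top-level route sealing done by bindAndHandle uses it for any exception thrown while handling a request.

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.HttpResponse
import akka.http.scaladsl.model.StatusCodes.InternalServerError
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.ExceptionHandler
import akka.stream.ActorMaterializer

import scala.util.control.NonFatal

object CompanyApiSketch extends App {
  implicit val system = ActorSystem("company-api-system")
  implicit val materializer = ActorMaterializer()

  // Hypothetical stand-in for the real repository call; fails the same way the marshalling did.
  def searchCompanies(): String = throw new IllegalArgumentException("requirement failed")

  // Same idea as the handler in the answer above.
  implicit def myExceptionHandler: ExceptionHandler = ExceptionHandler {
    case NonFatal(e) =>
      system.log.error(e, "request failed")
      complete(HttpResponse(InternalServerError, entity = "Internal Server Error"))
  }

  // Because the handler is implicit in scope here, the route is sealed with it,
  // so the exception thrown by searchCompanies() ends up logged with its stack trace.
  val route = path("companies") {
    get {
      complete(searchCompanies())
    }
  }

  Http().bindAndHandle(route, "0.0.0.0", 8080)
}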

Unknown UID warning when passing message to remote actor

Here is the code:
package com.packt.akka

import akka.actor.{Actor, ActorRef, ActorSystem, Props}
import com.typesafe.config.ConfigFactory

object MembersService {
  val config = ConfigFactory.load.getConfig("MembersService")
  val system = ActorSystem("MembersService", config)
  val worker = system.actorOf(Props[Worker], "remote-worker")
  println(s"Worker actor path is ${worker.path}")
}

object MemberServiceLookup {
  val config = ConfigFactory.load.getConfig("MemberServiceLookup")
  val system = ActorSystem("MemberServiceLookup", config)
  val worker = system.actorSelection("akka.tcp://MembersService@127.0.0.1:2552/user/remote-worker")
  worker ! Worker.Work("Hi Remote Actor")
}

object MembersServiceRemoteCreation extends App {
  val config = ConfigFactory.load.getConfig("MembersServiceRemoteCreation")
  val system = ActorSystem("MembersServiceRemoteCreation", config)
  val workerActor = system.actorOf(Props[Worker], "workerActorRemote")
  println(s"The remote path of worker Actor is ${workerActor.path}")
  workerActor ! Worker.Work("Hi Remote Worker")
}

class Worker extends Actor {
  import Worker._
  def receive = {
    case msg: Work =>
      println(s"I received Work Message and My ActorRef: ${self}")
  }
}

object Worker {
  case class Work(message: String)
}
application.conf:
MembersService {
  akka {
    actor {
      provider = "akka.remote.RemoteActorRefProvider"
    }
    remote {
      enabled-transports = ["akka.remote.netty.tcp"]
      netty.tcp {
        hostname = "127.0.0.1"
        port = 2552
      }
    }
  }
}

MemberServiceLookup {
  akka {
    actor {
      provider = "akka.remote.RemoteActorRefProvider"
    }
    remote {
      enabled-transports = ["akka.remote.netty.tcp"]
      netty.tcp {
        hostname = "127.0.0.1"
        port = 2553
      }
    }
  }
}

MembersServiceRemoteCreation {
  akka {
    actor {
      provider = "akka.remote.RemoteActorRefProvider"
      deployment {
        /workerActorRemote {
          remote: "akka.tcp://MembersService@127.0.0.1:2552"
        }
      }
    }
    remote {
      enabled-transports = ["akka.remote.netty.tcp"]
      netty.tcp {
        hostname = "127.0.0.1"
        port = 2558
      }
    }
  }
}
build.sbt:
name := "Hotswap Behavior"
version := "1.0"
scalaVersion := "2.11.7"
sbtVersion := "0.13.11"
libraryDependencies ++= Seq(
"com.typesafe.akka" %% "akka-actor" % "2.4.0",
"com.typesafe.akka" %% "akka-remote" % "2.4.0")
Running the app gives the following output (and quickly heats up the CPU):
/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/bin/java -Didea.launcher.port=7533 "-Didea.launcher.bin.path=/Applications/IntelliJ IDEA CE.app/Contents/bin" -Dfile.encoding=UTF-8 -classpath "/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/charsets.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/deploy.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/cldrdata.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/dnsns.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/jaccess.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/jfxrt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/localedata.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/nashorn.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/sunec.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/sunjce_provider.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/sunpkcs11.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/zipfs.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/javaws.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/jce.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/jfr.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/jfxswt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/jsse.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/management-agent.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/plugin.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/resources.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/rt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/lib/ant-javafx.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/lib/dt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/lib/javafx-mx.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/lib/jconsole.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/lib/packager.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/lib/sa-jdi.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/lib/tools.jar:/Users/kaiyin/IdeaProjects/Learning Akka (video) code/section5/akka-remoting/target/scala-2.11/classes:/Users/kaiyin/.ivy2/cache/org.uncommons.maths/uncommons-maths/jars/uncommons-maths-1.2.2a.jar:/Users/kaiyin/.ivy2/cache/com.typesafe/config/bundles/config-1.3.0.jar:/Users/kaiyin/.ivy2/cache/com.typesafe.akka/akka-actor_2.11/jars/akka-actor_2.11-2.4.0.jar:/Users/kaiyin/.ivy2/cache/com.typesafe.akka/akka-protobuf_2.11/jars/akka-protobuf_2.11-2.4.0.jar:/Users/kaiyin/.ivy2/cache/com.typesafe.akka/akka-remote_2.11/jars/akka-remote_2.11-2.4.0.jar:/Users/kaiyin/.ivy2/cache/io.netty/netty/bundles/netty-3.10.3.Final.jar:/Users/kaiyin/.ivy2/cache/org.scala-lang/scala-library/jars/scala-library-2.11.7.jar:/Applications/IntelliJ IDEA CE.app/Contents/lib/idea_rt.jar" com.intellij.rt.execution.application.AppMain com.packt.akka.MembersServiceRemoteCreation
[INFO] [04/05/2016 16:54:43.982] [main] [akka.remote.Remoting] Starting remoting
[INFO] [04/05/2016 16:54:44.143] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://MembersServiceRemoteCreation#127.0.0.1:2558]
[INFO] [04/05/2016 16:54:44.144] [main] [akka.remote.Remoting] Remoting now listens on addresses: [akka.tcp://MembersServiceRemoteCreation#127.0.0.1:2558]
The remote path of worker Actor is akka.tcp://MembersService#127.0.0.1:2552/remote/akka.tcp/MembersServiceRemoteCreation#127.0.0.1:2558/user/workerActorRemote
[WARN] [04/05/2016 16:54:44.262] [MembersServiceRemoteCreation-akka.remote.default-remote-dispatcher-6] [akka.tcp://MembersServiceRemoteCreation#127.0.0.1:2558/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FMembersService%40127.0.0.1%3A2552-0] Association with remote system [akka.tcp://MembersService#127.0.0.1:2552] has failed, address is now gated for [5000] ms. Reason: [Association failed with [akka.tcp://MembersService#127.0.0.1:2552]] Caused by: [Connection refused: /127.0.0.1:2552]
[INFO] [04/05/2016 16:54:44.266] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-2] [akka://MembersServiceRemoteCreation/deadLetters] Message [akka.remote.DaemonMsgCreate] from Actor[akka://MembersServiceRemoteCreation/deadLetters] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [04/05/2016 16:54:44.266] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-2] [akka://MembersServiceRemoteCreation/deadLetters] Message [com.packt.akka.Worker$Work] from Actor[akka://MembersServiceRemoteCreation/deadLetters] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [04/05/2016 16:54:45.165] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-3] [akka://MembersServiceRemoteCreation/deadLetters] Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://MembersServiceRemoteCreation/system/remote-watcher#-1480050878] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [3] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [04/05/2016 16:54:46.162] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-3] [akka://MembersServiceRemoteCreation/deadLetters] Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://MembersServiceRemoteCreation/system/remote-watcher#-1480050878] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [4] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [04/05/2016 16:54:47.164] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-3] [akka://MembersServiceRemoteCreation/deadLetters] Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://MembersServiceRemoteCreation/system/remote-watcher#-1480050878] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [5] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [04/05/2016 16:54:48.163] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-3] [akka://MembersServiceRemoteCreation/deadLetters] Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://MembersServiceRemoteCreation/system/remote-watcher#-1480050878] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [6] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [04/05/2016 16:54:49.162] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-3] [akka://MembersServiceRemoteCreation/deadLetters] Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://MembersServiceRemoteCreation/system/remote-watcher#-1480050878] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [7] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[WARN] [04/05/2016 16:54:49.285] [MembersServiceRemoteCreation-akka.remote.default-remote-dispatcher-5] [akka.tcp://MembersServiceRemoteCreation#127.0.0.1:2558/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FMembersService%40127.0.0.1%3A2552-0] Association with remote system [akka.tcp://MembersService#127.0.0.1:2552] has failed, address is now gated for [5000] ms. Reason: [Association failed with [akka.tcp://MembersService#127.0.0.1:2552]] Caused by: [Connection refused: /127.0.0.1:2552]
[INFO] [04/05/2016 16:54:50.161] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-2] [akka://MembersServiceRemoteCreation/deadLetters] Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://MembersServiceRemoteCreation/system/remote-watcher#-1480050878] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [8] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [04/05/2016 16:54:51.161] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-3] [akka://MembersServiceRemoteCreation/deadLetters] Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://MembersServiceRemoteCreation/system/remote-watcher#-1480050878] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [9] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [04/05/2016 16:54:52.171] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-2] [akka://MembersServiceRemoteCreation/deadLetters] Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://MembersServiceRemoteCreation/system/remote-watcher#-1480050878] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [10] dead letters encountered, no more dead letters will be logged. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[WARN] [04/05/2016 16:54:54.304] [MembersServiceRemoteCreation-akka.remote.default-remote-dispatcher-5] [akka.tcp://MembersServiceRemoteCreation#127.0.0.1:2558/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FMembersService%40127.0.0.1%3A2552-0] Association with remote system [akka.tcp://MembersService#127.0.0.1:2552] has failed, address is now gated for [5000] ms. Reason: [Association failed with [akka.tcp://MembersService#127.0.0.1:2552]] Caused by: [Connection refused: /127.0.0.1:2552]
[WARN] [04/05/2016 16:54:59.171] [MembersServiceRemoteCreation-akka.remote.default-remote-dispatcher-6] [akka.tcp://MembersServiceRemoteCreation#127.0.0.1:2558/system/remote-watcher] Detected unreachable: [akka.tcp://MembersService#127.0.0.1:2552]
[WARN] [04/05/2016 16:54:59.171] [MembersServiceRemoteCreation-akka.remote.default-remote-dispatcher-5] [akka.remote.Remoting] Association to [akka.tcp://MembersService#127.0.0.1:2552] with unknown UID is reported as quarantined, but address cannot be quarantined without knowing the UID, gating instead for 5000 ms.
Any idea what went wrong here?
Looking at the config, the port config is wrong:
Caused by: [Connection refused: /127.0.0.1:2552]
Ensure you have an Akka node listening on that port.
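One hedged way to check that, reusing the objects from the question: force the MembersService system to start (and bind to 2552) before the other apps try to reach it. The sleep is only for illustration.

object StartOrder extends App {
  // Referencing the object forces its initialization, which loads the MembersService
  // config block and starts an ActorSystem bound to 127.0.0.1:2552.
  MembersService.worker

  // Crude pause so remoting has finished binding before anything connects (illustration only).
  Thread.sleep(2000)

  // Now the other apps have something to connect to on port 2552.
  MemberServiceLookup.worker
}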

akka simple cluster with two seed nodes

I'm using Akka cluster 2.3.6.
I compiled two separate jars, and inside each I have a main class:
val configuration = ConfigFactory.load()
val bucket = configuration.getString("bucket")
val system = ActorSystem(bucket,configuration)
val resultDispatcherActor = system.actorOf(Props(new MyActor))
I'm running two separate jars:
java -Dconfig.file=poc.conf -jar poc.jar
where my poc.conf is the following:
akka {
  loglevel = INFO
  stdout-loglevel = INFO
  event-handlers = ["akka.event.Logging$DefaultLogger"]

  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }

  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = ""
      host = "10.0.0.5"
      port = 2551
    }
  }

  cluster {
    seed-nodes = [
      "akka.tcp://myCluster@10.0.0.5:2551",
      "akka.tcp://myCluster@10.0.0.5:2552"]
  }
}
Inside the netty.tcp block, the first application is assigned port 2551 and the second port 2552.
However when I start both jars, each of them prints the following log:
[INFO] [11/16/2014 18:43:53.890] [main] [Remoting] Starting remoting
[INFO] [11/16/2014 18:43:54.739] [main] [Remoting] Remoting started; listening on addresses :[akka.tcp://myCluster#10.0.0.5:2551]
[INFO] [11/16/2014 18:43:54.757] [main] [Cluster(akka://myCluster)] Cluster Node [akka.tcp://myCluster#10.0.0.5:2551] - Starting up...
[INFO] [11/16/2014 18:43:54.848] [main] [Cluster(akka://myCluster)] Cluster Node [akka.tcp://myCluster#10.0.0.5:2551] - Registered cluster JMX MBean [akka:type=Cluster]
[INFO] [11/16/2014 18:43:54.848] [main] [Cluster(akka://myCluster)] Cluster Node [akka.tcp://myCluster#10.0.0.5:2551] - Started up successfully
[INFO] [11/16/2014 18:43:54.858] [myCluster-akka.actor.default-dispatcher-15] [Cluster(akka://myCluster)] Cluster Node [akka.tcp://myCluster#10.0.0.5:2551] - Metrics will be retreived from MBeans, and may be incorrect on some platforms. To increase metric accuracy add the 'sigar.jar' to the classpath and the appropriate platform-specific native libary to 'java.library.path'. Reason: java.lang.ClassNotFoundException: org.hyperic.sigar.Sigar
[INFO] [11/16/2014 18:43:54.863] [myCluster-akka.actor.default-dispatcher-15] [Cluster(akka://myCluster)] Cluster Node [akka.tcp://myCluster#10.0.0.5:2551] - Metrics collection has started successfully
[WARN] [11/16/2014 18:43:54.963] [myCluster-akka.remote.default-remote-dispatcher-6] [Remoting] Tried to associate with unreachable remote address [akka.tcp://myCluster#10.0.0.5:2552]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: Connection refused: /10.0.0.5:2552
[INFO] [11/16/2014 18:43:54.972] [myCluster-akka.actor.default-dispatcher-16] [akka://myCluster/deadLetters] Message [akka.cluster.InternalClusterAction$InitJoin$] from Actor[akka://myCluster/system/cluster/core/daemon/firstSeedNodeProcess-1#-86755168] to Actor[akka://myCluster/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
What am I doing wrong?
You need to provide a port to the program when you run it; as written, netty.tcp.port is 2551 for both JVMs, so nothing ever listens on port 2552.
An example of how to do this:
// 0 means a random unused port will be assigned
val port = if (args.isEmpty) "0" else args(0)
val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port") //etc
Also, see the cluster samples for an example of how to do this.
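Putting that together, here is a minimal runnable sketch; the object name is hypothetical, and it assumes poc.conf defines the top-level "bucket" key (presumably "myCluster") that the question's main already reads:
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object ClusterNodeMain extends App {
  // 0 means a random unused port; pass 2551 for the first node and 2552 for the second.
  val port = if (args.isEmpty) "0" else args(0)

  val config = ConfigFactory
    .parseString(s"akka.remote.netty.tcp.port=$port")
    .withFallback(ConfigFactory.load()) // picks up the file given via -Dconfig.file=poc.conf

  val bucket = config.getString("bucket") // assumed to be "myCluster"
  val system = ActorSystem(bucket, config)
}
Run it twice, e.g. java -Dconfig.file=poc.conf -jar poc.jar 2551 and java -Dconfig.file=poc.conf -jar poc.jar 2552, so that each JVM binds one of the two seed-node ports.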
Make the seed nodes use the same port, 2551, and specify the transport under remote as below.
remote {
  transport = "akka.remote.netty.NettyRemoteTransport"
}
cluster {
  seed-nodes = [
    "akka.tcp://myCluster@10.0.0.5:2551",
    "akka.tcp://myCluster@10.0.0.5:2551"]
  use-dispatcher = cluster-dispatcher
}
cluster-dispatcher {
  type = "Dispatcher"
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 2
    parallelism-max = 4
  }
}
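For reference, a hedged sketch of how a node could be started with that layout; the object name is illustrative, and it follows the usual Akka 2.3 pattern of keeping cluster-dispatcher at the root of the configuration:
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object SingleSeedNode extends App {
  // Illustrative assumption: one seed node on port 2551, listed once, with the
  // cluster internals running on a small dedicated dispatcher.
  val config = ConfigFactory.parseString(
    """
      |akka.actor.provider = "akka.cluster.ClusterActorRefProvider"
      |akka.remote.netty.tcp.hostname = "10.0.0.5"
      |akka.remote.netty.tcp.port = 2551
      |akka.cluster.seed-nodes = ["akka.tcp://myCluster@10.0.0.5:2551"]
      |akka.cluster.use-dispatcher = cluster-dispatcher
      |cluster-dispatcher {
      |  type = "Dispatcher"
      |  executor = "fork-join-executor"
      |  fork-join-executor {
      |    parallelism-min = 2
      |    parallelism-max = 4
      |  }
      |}
    """.stripMargin).withFallback(ConfigFactory.load())

  val system = ActorSystem("myCluster", config)
}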

Akka remote actors: cannot establish a connection between local and remote actors

Hi, I am following this tutorial and I copy-pasted the code as-is, except that I changed the port from 5150 to 2552, and I am facing these errors.
HelloLocal project errors
[warn] there were 1 deprecation warning(s); re-run with -deprecation for details
[warn] one warning found
[info] Running HelloLocal
[INFO] [11/04/2014 22:37:50.707] [run-main-0] [Remoting] Starting remoting
[INFO] [11/04/2014 22:37:51.857] [run-main-0] [Remoting] Remoting started; listening on addresses :[akka.tcp://LocalSystem#127.0.1.1:2552]
[INFO] [11/04/2014 22:37:51.863] [run-main-0] [Remoting] Remoting now listens on addresses: [akka.tcp://LocalSystem#127.0.1.1:2552]
[ERROR] [11/04/2014 22:37:51.904] [LocalSystem-akka.actor.default-dispatcher-3] [RemoteActorRefProvider] Error while looking up address [akka://HelloRemoteSystem#127.0.0.1:2552]
akka.remote.RemoteTransportException: No transport is loaded for protocol: [akka], available protocols: [akka.tcp]
at akka.remote.Remoting$.localAddressForRemote(Remoting.scala:88)
at akka.remote.Remoting.localAddressForRemote(Remoting.scala:130)
at akka.remote.RemoteActorRefProvider.actorFor(RemoteActorRefProvider.scala:321)
at akka.actor.ActorRefFactory$class.actorFor(ActorRefProvider.scala:258)
at akka.actor.ActorCell.actorFor(ActorCell.scala:369)
at LocalActor.<init>(HelloLocal.scala:7)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:374)
at akka.util.Reflect$.instantiate(Reflect.scala:45)
at akka.actor.NoArgsReflectConstructor.produce(Props.scala:361)
at akka.actor.Props.newActor(Props.scala:252)
at akka.actor.ActorCell.newActor(ActorCell.scala:552)
at akka.actor.ActorCell.create(ActorCell.scala:578)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:456)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[INFO] [11/04/2014 22:37:51.913] [LocalSystem-akka.actor.default-dispatcher-5] [akka://HelloRemoteSystem#127.0.0.1:2552/user/RemoteActor] Message [java.lang.String] from Actor[akka://LocalSystem/user/LocalActor#543076206] to Actor[akka://HelloRemoteSystem#127.0.0.1:2552/user/RemoteActor] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
HelloRemote project errors are:
[info] Running HelloRemote
Remote Actor receive messgage : The remote actor is alive
[INFO] [11/04/2014 22:38:40.915] [HelloRemoteSystem-akka.actor.default-dispatcher-2] [akka://HelloRemoteSystem/deadLetters] Message [java.lang.String] from Actor[akka://HelloRemoteSystem/user/RemoteActor#-1308265074] to Actor[akka://HelloRemoteSystem/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
I am new to Akka and I am following this tutorial for learning purposes, and now there are errors; please help me solve them.
Edit: after following the akka-remoting 2.3.6 documentation, I am now getting different errors.
application.conf (HelloLocal)
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    enabled-transport = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "127.0.0.1"
      port = 0
    }
  }
}
HelloLocal.scala
import akka.actor._

class LocalActor extends Actor {
  // look up the remote actor by its full remote path
  val remote = context.actorSelection("akka.tcp://HelloRemoteSystem@127.0.0.1:2552/user/RemoteActor")
  var counter = 0

  def receive = {
    case "START" =>
      remote ! "Hello from the LocalActor"
    case msg: String =>
      println(s"LocalActor received message: '$msg'")
      if (counter < 5) {
        sender() ! "Hello back to you"
        counter += 1
      }
  }
}

object HelloLocal extends App {
  implicit val system = ActorSystem("LocalSystem")
  val localActor = system.actorOf(Props[LocalActor], name = "LocalActor")
  localActor ! "START"
}
and the HelloRemote application.conf is:
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    enabled-transport = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2552
    }
  }
}
These are the errors now
HelloLocal
[info] Running HelloLocal
[INFO] [11/05/2014 12:42:32.674] [run-main-1] [Remoting] Starting remoting
[INFO] [11/05/2014 12:42:34.031] [run-main-1] [Remoting] Remoting started; listening on addresses :[akka.tcp://LocalSystem#127.0.0.1:45047]
[INFO] [11/05/2014 12:42:34.040] [run-main-1] [Remoting] Remoting now listens on addresses: [akka.tcp://LocalSystem#127.0.0.1:45047]
[WARN] [11/05/2014 12:42:34.367] [LocalSystem-akka.remote.default-remote-dispatcher-5] [akka.tcp://LocalSystem#127.0.0.1:45047/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FHelloRemoteSystem%40127.0.0.1%3A2552-0/endpointWriter] AssociationError [akka.tcp://LocalSystem#127.0.0.1:45047] -> [akka.tcp://HelloRemoteSystem#127.0.0.1:2552]: Error [Invalid address: akka.tcp://HelloRemoteSystem#127.0.0.1:2552] [
akka.remote.InvalidAssociation: Invalid address: akka.tcp://HelloRemoteSystem#127.0.0.1:2552
Caused by: akka.remote.transport.Transport$InvalidAssociationException: Connection refused: /127.0.0.1:2552
]
[WARN] [11/05/2014 12:42:34.419] [LocalSystem-akka.remote.default-remote-dispatcher-13] [Remoting] Tried to associate with unreachable remote address [akka.tcp://HelloRemoteSystem#127.0.0.1:2552]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: Connection refused: /127.0.0.1:2552
[INFO] [11/05/2014 12:42:34.451] [LocalSystem-akka.actor.default-dispatcher-2] [akka://LocalSystem/deadLetters] Message [java.lang.String] from Actor[akka://LocalSystem/user/LocalActor#1798933307] to Actor[akka://LocalSystem/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
HelloRemote errors are
[info] Running HelloRemote
Remote Actor receive messgage : The remote actor is alive
[INFO] [11/05/2014 12:42:35.654] [HelloRemoteSystem-akka.actor.default-dispatcher-2] [akka://HelloRemoteSystem/deadLetters] Message [java.lang.String] from Actor[akka://HelloRemoteSystem/user/RemoteActor#1925624739] to Actor[akka://HelloRemoteSystem/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
You are probably using a more up-to-date version of Akka than the one used in the tutorial.
For 2.2.3 and above, your configuration needs to resemble:
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2552
    }
  }
}
Depending on your version, you can find more information at http://doc.akka.io/docs/akka/2.3.6/scala/remoting.html
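To make the whole example concrete, here is a hedged sketch of the remote side with the corrected enabled-transports key; the RemoteActor body is a hypothetical stand-in, not the tutorial's exact code:
import akka.actor._
import com.typesafe.config.ConfigFactory

// Stand-in for the tutorial's remote actor: echoes a greeting back to the sender.
class RemoteActor extends Actor {
  def receive = {
    case msg: String =>
      println(s"RemoteActor received message: '$msg'")
      sender() ! "Hello from the RemoteActor"
  }
}

object HelloRemote extends App {
  // The corrected settings, applied programmatically so the snippet is self-contained;
  // the same values can live in application.conf as shown above.
  val config = ConfigFactory.parseString(
    """
      |akka.actor.provider = "akka.remote.RemoteActorRefProvider"
      |akka.remote.enabled-transports = ["akka.remote.netty.tcp"]
      |akka.remote.netty.tcp.hostname = "127.0.0.1"
      |akka.remote.netty.tcp.port = 2552
    """.stripMargin)

  val system = ActorSystem("HelloRemoteSystem", config)
  system.actorOf(Props[RemoteActor], name = "RemoteActor")
}
Start this system first, then run HelloLocal in a separate JVM; the LocalActor's actorSelection of akka.tcp://HelloRemoteSystem@127.0.0.1:2552/user/RemoteActor should then resolve instead of being gated.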