akka simple cluster with two seed nodes - scala

I'm using Akka Cluster 2.3.6.
I compiled two separate jars, each with a main class containing:
val configuration = ConfigFactory.load()
val bucket = configuration.getString("bucket")
val system = ActorSystem(bucket,configuration)
val resultDispatcherActor = system.actorOf(Props(new MyActor))
I'm running two separate jars:
java -Dconfig.file=poc.conf -jar poc.jar
where my poc.conf is the following:
akka {
  loglevel = INFO
  stdout-loglevel = INFO
  event-handlers = ["akka.event.Logging$DefaultLogger"]
  actor {
    provider = "akka.cluster.ClusterActorRefProvider"
  }
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    log-remote-lifecycle-events = off
    netty.tcp {
      hostname = ""
      host = "10.0.0.5"
      port = 2551
    }
  }
  cluster {
    seed-nodes = [
      "akka.tcp://myCluster@10.0.0.5:2551",
      "akka.tcp://myCluster@10.0.0.5:2552"]
  }
}
Inside the netty.tcp block, the first application is assigned port 2551 and the second port 2552.
However when I start both jars, each of them prints the following log:
[INFO] [11/16/2014 18:43:53.890] [main] [Remoting] Starting remoting
[INFO] [11/16/2014 18:43:54.739] [main] [Remoting] Remoting started; listening on addresses :[akka.tcp://myCluster#10.0.0.5:2551]
[INFO] [11/16/2014 18:43:54.757] [main] [Cluster(akka://myCluster)] Cluster Node [akka.tcp://myCluster#10.0.0.5:2551] - Starting up...
[INFO] [11/16/2014 18:43:54.848] [main] [Cluster(akka://myCluster)] Cluster Node [akka.tcp://myCluster#10.0.0.5:2551] - Registered cluster JMX MBean [akka:type=Cluster]
[INFO] [11/16/2014 18:43:54.848] [main] [Cluster(akka://myCluster)] Cluster Node [akka.tcp://myCluster#10.0.0.5:2551] - Started up successfully
[INFO] [11/16/2014 18:43:54.858] [myCluster-akka.actor.default-dispatcher-15] [Cluster(akka://myCluster)] Cluster Node [akka.tcp://myCluster#10.0.0.5:2551] - Metrics will be retreived from MBeans, and may be incorrect on some platforms. To increase metric accuracy add the 'sigar.jar' to the classpath and the appropriate platform-specific native libary to 'java.library.path'. Reason: java.lang.ClassNotFoundException: org.hyperic.sigar.Sigar
[INFO] [11/16/2014 18:43:54.863] [myCluster-akka.actor.default-dispatcher-15] [Cluster(akka://myCluster)] Cluster Node [akka.tcp://myCluster#10.0.0.5:2551] - Metrics collection has started successfully
[WARN] [11/16/2014 18:43:54.963] [myCluster-akka.remote.default-remote-dispatcher-6] [Remoting] Tried to associate with unreachable remote address [akka.tcp://myCluster#10.0.0.5:2552]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: Connection refused: /10.0.0.5:2552
[INFO] [11/16/2014 18:43:54.972] [myCluster-akka.actor.default-dispatcher-16] [akka://myCluster/deadLetters] Message [akka.cluster.InternalClusterAction$InitJoin$] from Actor[akka://myCluster/system/cluster/core/daemon/firstSeedNodeProcess-1#-86755168] to Actor[akka://myCluster/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
What am I doing wrong?

You need to provide a port to the program when you run it. Your netty.tcp.port value defaults to 2551 in both cases.
An example of how to do this:
// 0 means a random unused port will be assigned
val port = if (args.isEmpty) "0" else args(0)
val config = ConfigFactory.parseString(s"akka.remote.netty.tcp.port=$port") //etc
Also, see the cluster samples for an example on how to do this.
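Putting that together, a minimal sketch of the main class from the question with the port taken from the command line (assuming the rest of poc.conf stays as above and that bucket resolves to the system name myCluster used in the seed list):
object Main extends App {
  import akka.actor.{ActorSystem, Props}
  import com.typesafe.config.ConfigFactory

  // 0 means a random unused port will be assigned
  val port = if (args.isEmpty) "0" else args(0)

  // The command-line port overrides the one in poc.conf; everything else falls back to the file.
  val configuration = ConfigFactory
    .parseString(s"akka.remote.netty.tcp.port=$port")
    .withFallback(ConfigFactory.load())

  // The ActorSystem name has to match the "myCluster" part of the seed-node addresses.
  val bucket = configuration.getString("bucket")
  val system = ActorSystem(bucket, configuration)
  val resultDispatcherActor = system.actorOf(Props(new MyActor))
}
Then the first node can be started with java -Dconfig.file=poc.conf -jar poc.jar 2551 and the second with ... 2552, so each process binds the port its seed-node entry expects.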

Make both seed nodes use the same port 2551, and specify the transport under remote as below.
remote {
  transport = "akka.remote.netty.NettyRemoteTransport"
}
cluster {
  seed-nodes = [
    "akka.tcp://myCluster@10.0.0.5:2551",
    "akka.tcp://myCluster@10.0.0.5:2551"]
  use-dispatcher = cluster-dispatcher
}
cluster-dispatcher {
  type = "Dispatcher"
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 2
    parallelism-max = 4
  }
}

Related

Lagom create static µ-service cluster

I wrote a few µ-services with Lagom.
I have a docker-compose file in which all µ-services are created manually.
Now I have a problem.
If I use discovery with akka-dns, nothing is found.
If I use static service location and give the services their exposed ports, nothing is found either. Where is my mistake?
Docker compose file:
security:
  container_name: panakeia-security
  image: nexus.familieschmidt.online/andre-user/panakeia/security-impl:latest
  environment:
    - APPLICATION_SECRET=hlPlU12MK?[oF1Xj`>xd>CtCjTHohfu0=ekVFOo?r]lH^GpFo5o?kurLFO6sQPzD
    - POSTGRESQL_URL=jdbc:postgresql://****SERVER-IP****:12000/panakeia
    - POSTGRESQL_USERNAME=panakeia
    - POSTGRESQL_PASSWORD=123456789
    - INIT_USERNAME=test@test.de
    - INIT_USERPASS=123456
    - REQUIRED_CONTACT_POINT_NR=1
    - JAVA_OPTS=-Dconfig.resource=prod-application.conf -Dplay.server.pidfile.path=/dev/null
  expose:
    - 9000
    - 15000
    # - 8558
  ports:
    - "15000:15000"
    - "14999:9000"
  networks:
    - panakeia-network
My loader class
class SecurityLoader extends LagomApplicationLoader {
  override def load(context: LagomApplicationContext): LagomApplication =
    new SecurityApplication(context) with ConfigurationServiceLocatorComponents {
      // override def staticServiceUri: URI = URI.create("http://localhost:9000")
    } //AkkaDiscoveryComponents

  override def loadDevMode(context: LagomApplicationContext): LagomApplication =
    new SecurityApplication(context) with LagomDevModeComponents

  override def describeService = Some(readDescriptor[SecurityService])
}
and my production.conf
include "application"
play {
  server {
    pidfile.path = "/dev/null"
  }
  http.secret.key = "${APPLICATION_SECRET}"
}
db.default {
  url = ${POSTGRESQL_URL}
  username = ${POSTGRESQL_USERNAME}
  password = ${POSTGRESQL_PASSWORD}
}
user.init {
  username = ${INIT_USERNAME}
  password = ${INIT_USERPASS}
}
pac4j.jwk = {"kty":"oct","k":${JWK_KEY},"alg":"HS512"}
lagom.persistence.jdbc.create-tables.auto = true
//akka {
// discovery.method = akka-dns
//
// cluster {
// shutdown-after-unsuccessful-join-seed-nodes = 60s
// }
//
// management {
// cluster.bootstrap {
// contact-point-discovery {
// discovery-method = akka.discovery
//// discovery-method = akka-dns
// service-name = "security-service"
// required-contact-point-nr = ${REQUIRED_CONTACT_POINT_NR}
// }
// }
// }
//}
lagom.services {
security-impl = "http://****SERVER-IP****:15000"
// serviceB = "http://10.1.2.4:8080"
}
For other domains on the same server I have a reverse nginx proxy, but not for this project.
When I open http://SERVER-IP:14999 in the browser I get the typical Play 404 Not Found screen. :(
I have no problem writing the compose file and linking the services manually, and no problem using akka-dns, but I don't want Kubernetes or anything similar.
Thanks for your help.
Here is a log of one of the µ-services:
2021-04-02T14:38:24.151Z [info] akka.event.slf4j.Slf4jLogger [] - Slf4jLogger started
2021-04-02T14:38:24.409Z [info] akka.remote.artery.tcp.ArteryTcpTransport [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=ArteryTcpTransport(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.409UTC] - Remoting started with transport [Artery tcp]; listening on address [akka://application#192.168.0.5:25520] with UID [-580933006609059378]
2021-04-02T14:38:24.427Z [info] akka.cluster.Cluster [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.427UTC] - Cluster Node [akka://application#192.168.0.5:25520] - Starting up, Akka version [2.6.8] ...
2021-04-02T14:38:24.527Z [info] akka.cluster.Cluster [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.527UTC] - Cluster Node [akka://application#192.168.0.5:25520] - Registered cluster JMX MBean [akka:type=Cluster]
2021-04-02T14:38:24.528Z [info] akka.cluster.Cluster [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.527UTC] - Cluster Node [akka://application#192.168.0.5:25520] - Started up successfully
2021-04-02T14:38:24.552Z [info] akka.cluster.Cluster [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-2, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.552UTC] - Cluster Node [akka://application#192.168.0.5:25520] - No downing-provider-class configured, manual cluster downing required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#downing
2021-04-02T14:38:24.552Z [info] akka.cluster.Cluster [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-2, akkaSource=Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:24.552UTC] - Cluster Node [akka://application#192.168.0.5:25520] - No seed-nodes configured, manual cluster join required, see https://doc.akka.io/docs/akka/current/typed/cluster.html#joining
2021-04-02T14:38:25.383Z [info] akka.io.DnsExt [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=DnsExt(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.383UTC] - Creating async dns resolver async-dns with manager name SD-DNS
2021-04-02T14:38:25.386Z [info] akka.management.cluster.bootstrap.ClusterBootstrap [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=ClusterBootstrap(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.386UTC] - Bootstrap using default `akka.discovery` method: AggregateServiceDiscovery
2021-04-02T14:38:25.394Z [info] akka.management.internal.HealthChecksImpl [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=HealthChecksImpl(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.394UTC] - Loading readiness checks List(NamedHealthCheck(cluster-membership,akka.management.cluster.scaladsl.ClusterMembershipCheck))
2021-04-02T14:38:25.394Z [info] akka.management.internal.HealthChecksImpl [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=HealthChecksImpl(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.394UTC] - Loading liveness checks List()
2021-04-02T14:38:25.488Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.488UTC] - Binding Akka Management (HTTP) endpoint to: 192.168.0.5:8558
2021-04-02T14:38:25.541Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.541UTC] - Including HTTP management routes for ClusterHttpManagementRouteProvider
2021-04-02T14:38:25.581Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.581UTC] - Including HTTP management routes for ClusterBootstrap
2021-04-02T14:38:25.585Z [info] akka.management.cluster.bootstrap.ClusterBootstrap [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=ClusterBootstrap(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.584UTC] - Using self contact point address: http://192.168.0.5:8558
2021-04-02T14:38:25.600Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:25.600UTC] - Including HTTP management routes for HealthCheckRoutes
2021-04-02T14:38:26.108Z [info] akka.management.scaladsl.AkkaManagement [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-18, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:26.108UTC] - Bound Akka Management (HTTP) endpoint to: 192.168.0.5:8558
2021-04-02T14:38:26.154Z [info] akka.management.cluster.bootstrap.ClusterBootstrap [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=ClusterBootstrap(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:26.154UTC] - Initiating bootstrap procedure using akka.discovery method...
2021-04-02T14:38:26.163Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:26.163UTC] - Locating service members. Using discovery [akka.discovery.aggregate.AggregateServiceDiscovery], join decider [akka.management.cluster.bootstrap.LowestAddressJoinDecider]
2021-04-02T14:38:26.164Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:26.163UTC] - Looking up [Lookup(application,None,Some(tcp))]
2021-04-02T14:38:26.189Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:26.189UTC] - Located service members based on: [Lookup(application,None,Some(tcp))]: [], filtered to []
2021-04-02T14:38:26.195Z [info] play.api.db.DefaultDBApi [] - Database [default] initialized
2021-04-02T14:38:26.200Z [info] play.api.db.HikariCPConnectionPool [] - Creating Pool for datasource 'default'
2021-04-02T14:38:26.207Z [info] com.zaxxer.hikari.HikariDataSource [] - HikariPool-1 - Starting...
2021-04-02T14:38:26.218Z [info] com.zaxxer.hikari.HikariDataSource [] - HikariPool-1 - Start completed.
2021-04-02T14:38:26.221Z [info] play.api.db.HikariCPConnectionPool [] - datasource [default] bound to JNDI as DefaultDS
2021-04-02T14:38:26.469Z [info] akka.cluster.sharding.ShardRegion [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-4, akkaSource=akka://application#192.168.0.5:25520/system/sharding/ProfileProcessor, sourceActorSystem=application, akkaTimestamp=14:38:26.469UTC] - ProfileProcessor: Idle entities will be passivated after [2.000 min]
2021-04-02T14:38:26.480Z [info] akka.cluster.sharding.typed.scaladsl.ClusterSharding [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=main, akkaSource=ClusterSharding(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:26.480UTC] - Starting Shard Region [ProfileEntity]...
2021-04-02T14:38:26.483Z [info] akka.cluster.sharding.ShardRegion [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.internal-dispatcher-3, akkaSource=akka://application#192.168.0.5:25520/system/sharding/ProfileEntity, sourceActorSystem=application, akkaTimestamp=14:38:26.483UTC] - ProfileEntity: Idle entities will be passivated after [2.000 min]
2021-04-02T14:38:26.534Z [info] play.api.Play [] - Application started (Prod) (no global state)
2021-04-02T14:38:26.553Z [info] play.core.server.AkkaHttpServer [] - Listening for HTTP on /0.0.0.0:9000
2021-04-02T14:38:27.188Z [info] akka.management.cluster.bootstrap.LowestAddressJoinDecider [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=LowestAddressJoinDecider(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:27.187UTC] - Discovered [0] contact points, confirmed [0], which is less than the required [2], retrying
2021-04-02T14:38:27.363Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:27.363UTC] - Looking up [Lookup(application,None,Some(tcp))]
2021-04-02T14:38:27.369Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-19, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:27.369UTC] - Located service members based on: [Lookup(application,None,Some(tcp))]: [], filtered to []
2021-04-02T14:38:28.183Z [info] akka.management.cluster.bootstrap.LowestAddressJoinDecider [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-19, akkaSource=LowestAddressJoinDecider(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:28.183UTC] - Discovered [0] contact points, confirmed [0], which is less than the required [2], retrying
2021-04-02T14:38:28.563Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:28.563UTC] - Looking up [Lookup(application,None,Some(tcp))]
2021-04-02T14:38:28.564Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-18, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:28.564UTC] - Located service members based on: [Lookup(application,None,Some(tcp))]: [], filtered to []
2021-04-02T14:38:29.183Z [info] akka.management.cluster.bootstrap.LowestAddressJoinDecider [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-5, akkaSource=LowestAddressJoinDecider(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:29.183UTC] - Discovered [0] contact points, confirmed [0], which is less than the required [2], retrying
2021-04-02T14:38:29.724Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-22, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:29.723UTC] - Looking up [Lookup(application,None,Some(tcp))]
2021-04-02T14:38:29.729Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-18, akkaSource=akka://application#192.168.0.5:25520/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=14:38:29.729UTC] - Located service members based on: [Lookup(application,None,Some(tcp))]: [], filtered to []
2021-04-02T14:38:30.183Z [info] akka.management.cluster.bootstrap.LowestAddressJoinDecider [akkaAddress=akka://application#192.168.0.5:25520, sourceThread=application-akka.actor.default-dispatcher-18, akkaSource=LowestAddressJoinDecider(akka://application), sourceActorSystem=application, akkaTimestamp=14:38:30.183UTC] - Discovered [0] contact points, confirmed [0], which is less than the required [2], retrying
It looks like the startup process is going completely wrong.
Inside the SecurityApplication class a user should be created with this code:
val newUUID = UUID.randomUUID()
clusterSharding.entityRefFor(Profile.typedKey, newUUID.toString)
  .ask[ProfileConfirmation](reply =>
    CreateProfile(username, password, Some(EmailAddress(username)), Set(SuperAdminRole), PersonalData("", "", ""), reply)
  )(30.seconds)
  .map {
    case ProfileCmdAccepted(profile) => s"Profile ${profile.login} created"
    case ProfileCmdRejected(err)     => s"ERROR while default user init: $err"
    case a                           => s"a: $a"
  }
But a timeout exception is thrown there. I think the cluster service is not initialized. In the demo everything works.
Does someone have an idea?
Okay, I found it.
In the production conf I added:
lagom.services {
security-service = "http://"${REMOTE_IP}":15000"
binary-service = "http://"${REMOTE_IP}":15001"
}
lagom.cluster.bootstrap.enabled = false
In the Lagom application loader, in the load method, I made my application inherit from ConfigurationServiceLocatorComponents (see the sketch right below).
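A minimal sketch of the loader after that change, reusing the SecurityLoader from the question (only load changes; dev mode stays on the dev-mode locator):
class SecurityLoader extends LagomApplicationLoader {
  // Production: resolve other services via the lagom.services entries in the config
  override def load(context: LagomApplicationContext): LagomApplication =
    new SecurityApplication(context) with ConfigurationServiceLocatorComponents

  override def loadDevMode(context: LagomApplicationContext): LagomApplication =
    new SecurityApplication(context) with LagomDevModeComponents

  override def describeService = Some(readDescriptor[SecurityService])
}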
And now, really important: Docker must be configured!
I used docker-compose.
First, add a network:
networks:
  panakeia-network:
    # driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16
And now I give cluster.seed-nodes.0 the IP of the service, and name the actor system. That is also important!
security:
  container_name: panakeia-security
  image: security-impl:latest
  environment:
    - APPLICATION_SECRET=****
    - POSTGRESQL_URL=jdbc:postgresql://postgres-database:5432/panakeia
    - POSTGRESQL_USERNAME=????
    - POSTGRESQL_PASSWORD=******
    - INIT_USERNAME=test@test.de
    - INIT_USERPASS=*******
    - JWK_KEY=******
    - REQUIRED_CONTACT_POINT_NR=1
    - KAFKA_BROKER=REALLY_IP/OR_KAFKA_SERVICE_NAME:9092
    - REMOTE_IP=????
    - JAVA_OPTS=-Dconfig.resource=prod-application.conf -Dplay.server.pidfile.path=/dev/null -Dakka.cluster.seed-nodes.0=akka://panakeia@172.28.1.5:25520 -Dplay.akka.actor-system=panakeia
  expose:
    - 9000
    - 15000
  ports:
    - "15000:15000"
    - "14999:9000"
  networks:
    panakeia-network:
      ipv4_address: 172.28.1.5
This way it seems possible to create a simple, small µ-service cluster without complex systems like Kubernetes.

Kafka java.io.EOFException - NetworkReceive.readFromReadableChannel

I'm trying to connect to IBM Message Hub from Apache Spark 2.2.1 Structured Streaming.
The connection code is quite basic:
import org.apache.spark.sql.functions._
import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder.appName("StreamingRetailTransactions").getOrCreate()
import spark.implicits._
val df = spark.readStream.
format("kafka").
option("kafka.bootstrap.servers", "kafka03-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka04-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka01-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka02-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka05-prod02.messagehub.services.eu-gb.bluemix.net:9093").
option("subscribe", "transactions_load").
option("security.protocol", "SASL_SSL").
option("sasl.mechanism", "PLAIN").
option("sasl.jaas.config", "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"*****\" password=\"*****\";").
option("ssl.protocol", "TLSv1.2").
option("ssl.enabled.protocols", "TLSv1.2").
option("ssl.endpoint.identification.algorithm", "HTTPS").
option("auto.offset.reset","earliest").
option("group.id", System.currentTimeMillis).
load()
val query = df.writeStream.format("console").start()
I'm starting the spark shell with:
~/spark-2.2.1-bin-hadoop2.7$ ./bin/spark-shell --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.0
However, I'm getting a disconnected error:
scala> 18/01/09 08:34:17 WARN NetworkClient: Bootstrap broker kafka02-prod02.messagehub.services.eu-gb.bluemix.net:9093 disconnected
18/01/09 08:34:17 WARN NetworkClient: Bootstrap broker kafka03-prod02.messagehub.services.eu-gb.bluemix.net:9093 disconnected
18/01/09 08:34:17 WARN NetworkClient: Bootstrap broker kafka01-prod02.messagehub.services.eu-gb.bluemix.net:9093 disconnected
<<repeats forever>>
I've increased debug output with sc.setLogLevel("DEBUG") and I get:
<<log output omitted for brevity>>
18/01/09 08:38:28 DEBUG SessionState: SessionState user: null
18/01/09 08:38:28 DEBUG SessionState: HDFS root scratch dir: /tmp/hive with schema null, permission: rwx-wx-wx
18/01/09 08:38:28 INFO SessionState: Created local directory: /tmp/2b4557ce-dd17-46d1-9ab0-f9a36fd750f9_resources
18/01/09 08:38:28 INFO SessionState: Created HDFS directory: /tmp/hive/snowch/2b4557ce-dd17-46d1-9ab0-f9a36fd750f9
18/01/09 08:38:28 INFO SessionState: Created local directory: /tmp/snowch/2b4557ce-dd17-46d1-9ab0-f9a36fd750f9
18/01/09 08:38:28 INFO SessionState: Created HDFS directory: /tmp/hive/snowch/2b4557ce-dd17-46d1-9ab0-f9a36fd750f9/_tmp_space.db
18/01/09 08:38:28 INFO HiveClientImpl: Warehouse location for Hive client (version 1.2.1) is file:/home/snowch/spark-2.2.1-bin-hadoop2.7/spark-warehouse
18/01/09 08:38:28 DEBUG SessionState: Session is using authorization class class org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider
18/01/09 08:38:28 DEBUG StateStoreCoordinatorRef: Retrieved existing StateStoreCoordinator endpoint
18/01/09 08:38:28 DEBUG StreamExecution: Starting Trigger Calculation
18/01/09 08:38:28 INFO StreamExecution: Starting new streaming query.
18/01/09 08:38:28 DEBUG UserGroupInformation: PrivilegedAction as:snowch (auth:SIMPLE) from:org.apache.hadoop.fs.FileContext.getAbstractFileSystem(FileContext.java:331)
18/01/09 08:38:28 DEBUG KafkaSource$$anon$1: Unable to find batch /tmp/temporary-fb3e6e1a-fbbe-4098-991c-0b29f63ecade/sources/0/0
18/01/09 08:38:28 DEBUG AbstractCoordinator: Sending coordinator request for group spark-kafka-source-9a14bb54-8f1b-47db-8497-19c083128496--998588290-driver-0 to broker kafka05-prod02.messagehub.services.eu-gb.bluemix.net:9093 (id: -5 rack: null)
18/01/09 08:38:28 DEBUG NetworkClient: Initiating connection to node -5 at kafka05-prod02.messagehub.services.eu-gb.bluemix.net:9093.
18/01/09 08:38:28 DEBUG NetworkClient: Initialize connection to node -3 for sending metadata request
18/01/09 08:38:28 DEBUG NetworkClient: Initiating connection to node -3 at kafka01-prod02.messagehub.services.eu-gb.bluemix.net:9093.
18/01/09 08:38:28 DEBUG Metrics: Added sensor with name node--5.bytes-sent
18/01/09 08:38:28 DEBUG Metrics: Added sensor with name node--5.bytes-received
18/01/09 08:38:28 DEBUG Metrics: Added sensor with name node--5.latency
18/01/09 08:38:28 DEBUG NetworkClient: Completed connection to node -5
18/01/09 08:38:28 DEBUG Metrics: Added sensor with name node--3.bytes-sent
18/01/09 08:38:28 DEBUG Metrics: Added sensor with name node--3.bytes-received
18/01/09 08:38:28 DEBUG Metrics: Added sensor with name node--3.latency
18/01/09 08:38:28 DEBUG NetworkClient: Completed connection to node -3
18/01/09 08:38:28 DEBUG NetworkClient: Sending metadata request {topics=[transactions_load]} to node -5
18/01/09 08:38:28 DEBUG Selector: Connection with kafka05-prod02.messagehub.services.eu-gb.bluemix.net/159.8.179.153 disconnected
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:154)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:135)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:323)
at org.apache.kafka.common.network.Selector.poll(Selector.java:283)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:179)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:974)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:938)
at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anonfun$fetchLatestOffsets$1$$anonfun$apply$9.apply(KafkaOffsetReader.scala:174)
at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anonfun$fetchLatestOffsets$1$$anonfun$apply$9.apply(KafkaOffsetReader.scala:172)
at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anonfun$org$apache$spark$sql$kafka010$KafkaOffsetReader$$withRetriesWithoutInterrupt$1.apply$mcV$sp(KafkaOffsetReader.scala:263)
at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anonfun$org$apache$spark$sql$kafka010$KafkaOffsetReader$$withRetriesWithoutInterrupt$1.apply(KafkaOffsetReader.scala:262)
at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anonfun$org$apache$spark$sql$kafka010$KafkaOffsetReader$$withRetriesWithoutInterrupt$1.apply(KafkaOffsetReader.scala:262)
at org.apache.spark.util.UninterruptibleThread.runUninterruptibly(UninterruptibleThread.scala:85)
at org.apache.spark.sql.kafka010.KafkaOffsetReader.org$apache$spark$sql$kafka010$KafkaOffsetReader$$withRetriesWithoutInterrupt(KafkaOffsetReader.scala:261)
at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anonfun$fetchLatestOffsets$1.apply(KafkaOffsetReader.scala:172)
at org.apache.spark.sql.kafka010.KafkaOffsetReader$$anonfun$fetchLatestOffsets$1.apply(KafkaOffsetReader.scala:172)
at org.apache.spark.sql.kafka010.KafkaOffsetReader.runUninterruptibly(KafkaOffsetReader.scala:230)
at org.apache.spark.sql.kafka010.KafkaOffsetReader.fetchLatestOffsets(KafkaOffsetReader.scala:171)
at org.apache.spark.sql.kafka010.KafkaSource$$anonfun$initialPartitionOffsets$1.apply(KafkaSource.scala:132)
at org.apache.spark.sql.kafka010.KafkaSource$$anonfun$initialPartitionOffsets$1.apply(KafkaSource.scala:129)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.kafka010.KafkaSource.initialPartitionOffsets$lzycompute(KafkaSource.scala:129)
at org.apache.spark.sql.kafka010.KafkaSource.initialPartitionOffsets(KafkaSource.scala:97)
at org.apache.spark.sql.kafka010.KafkaSource.getOffset(KafkaSource.scala:163)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$10$$anonfun$apply$6.apply(StreamExecution.scala:521)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$10$$anonfun$apply$6.apply(StreamExecution.scala:521)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:279)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$10.apply(StreamExecution.scala:520)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$10.apply(StreamExecution.scala:518)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$constructNextBatch(StreamExecution.scala:518)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$populateStartOffsets(StreamExecution.scala:492)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(StreamExecution.scala:297)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$apply$mcZ$sp$1.apply(StreamExecution.scala:294)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1$$anonfun$apply$mcZ$sp$1.apply(StreamExecution.scala:294)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:279)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches$1.apply$mcZ$sp(StreamExecution.scala:294)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches(StreamExecution.scala:290)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:206)
18/01/09 08:38:28 DEBUG NetworkClient: Node -5 disconnected.
18/01/09 08:38:28 WARN NetworkClient: Bootstrap broker kafka05-prod02.messagehub.services.eu-gb.bluemix.net:9093 disconnected
18/01/09 08:38:28 DEBUG ConsumerNetworkClient: Cancelled GROUP_COORDINATOR request ClientRequest(expectResponse=true, callback=org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler#4eacac4e, request=RequestSend(header={api_key=10,api_version=0,correlation_id=0,client_id=consumer-1}, body={group_id=spark-kafka-source-9a14bb54-8f1b-47db-8497-19c083128496--998588290-driver-0}), createdTimeMs=1515487108479, sendTimeMs=1515487108598) with correlation id 0 due to node -5 being disconnected
18/01/09 08:38:28 DEBUG NetworkClient: Sending metadata request {topics=[transactions_load]} to node -3
18/01/09 08:38:28 DEBUG Selector: Connection with kafka01-prod02.messagehub.services.eu-gb.bluemix.net/159.8.179.149 disconnected
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
<<repeated>>
I have seen the following similar questions:
Kafka Error in I/O java.io.EOFException: null - however, this question is for a much older version of Kafka
I understand that some of the output is just 'noise', however, my spark streaming application does not appear to be receiving any data. I have connected with a console client and I am able to see data.
Update 1 - I've tried configuring JaaS, but still getting the same error. The issue may be that the JaaS code needs to run on each worker node, but isn't getting run on them.
sc.setLogLevel("DEBUG")

def jaasClientConfig(username: String, password: String): Unit = {
  import javax.security.auth.login.AppConfigurationEntry
  import javax.security.auth.login.Configuration
  import javax.security.auth.login.LoginException
  import scala.collection.JavaConversions._

  System.setProperty("java.security.auth.login.config", "")
  Configuration.setConfiguration(new Configuration() {
    def getAppConfigurationEntry(name: String): Array[AppConfigurationEntry] = {
      val idMap = Map(
        "serviceName" -> "kafka",
        "username" -> username,
        "password" -> password
      )
      val ace = new AppConfigurationEntry(
        "org.apache.kafka.common.security.plain.PlainLoginModule",
        AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
        idMap
      )
      Array(ace)
    }
  })
}

def startSparkStreaming(): Unit = {
  import org.apache.spark.sql.functions._
  import org.apache.spark.sql.SparkSession

  val spark = SparkSession.builder.appName("StreamingRetailTransactions").getOrCreate()
  import spark.implicits._

  val df = spark.readStream.
    format("kafka").
    option("kafka.bootstrap.servers", "kafka03-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka04-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka01-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka02-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka05-prod02.messagehub.services.eu-gb.bluemix.net:9093").
    option("subscribe", "transactions_load").
    option("security.protocol", "SASL_SSL").
    option("sasl.mechanism", "PLAIN").
    option("ssl.protocol", "TLSv1.2").
    option("ssl.enabled.protocols", "TLSv1.2").
    option("ssl.endpoint.identification.algorithm", "HTTPS").
    option("auto.offset.reset","earliest").
    option("group.id", System.currentTimeMillis).
    load()

  val query = df.writeStream.format("console").start()
}

jaasClientConfig("****","****")
startSparkStreaming()
Update 2
I've also tried with a jaas.conf:
~/spark-2.2.1-bin-hadoop2.7$ ./bin/spark-shell --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.0 --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.conf=jaas.conf" --files "jaas.conf"
and ...
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="*****"
password="*****";
}
Still the same problem ...
First I needed to run spark-shell with --conf that points the executor and driver to my jaas.conf:
./bin/spark-shell --master local[1] \
--jars external/kafka-0-10-sql/target/spark-sql-kafka-0-10_2.11-2.2.2-SNAPSHOT.jar,external/kafka-0-10-assembly/target/spark-streaming-kafka-0-10-assembly_2.11-2.2.2-SNAPSHOT.jar \
--conf "spark.driver.extraJavaOptions=-Djava.security.auth.login.config=/path/to/jaas.conf" \
--conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=/path/to/jaas.conf" \
--num-executors 1 --executor-cores 1
Next I had to add some kafka options:
val df = spark.readStream.
format("kafka").
option("kafka.bootstrap.servers", "kafka03-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka04-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka01-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka02-prod02.messagehub.services.eu-gb.bluemix.net:9093,kafka05-prod02.messagehub.services.eu-gb.bluemix.net:9093").
option("subscribe", "transactions_load").
option("kafka.security.protocol", "SASL_SSL").
option("kafka.sasl.mechanism", "PLAIN").
option("kafka.ssl.protocol", "TLSv1.2").
option("kafka.ssl.enabled.protocols", "TLSv1.2").
load()
Note that the kafka options need to be prefixed with kafka., for example:
security.protocol => kafka.security.protocol
These changes solved the connectivity issue for me.
This appears to be a failed authentication. At the current Message Hub version of Kafka the server simply closes the connection when authentication fails.
The "sasl.jaas.config" setting is only supported by the Kafka client from version 0.10.2, and the Kafka client used by Spark 2.2.1 is 0.10.0, so authentication fails as suspected.
You can use the java.security.auth.login.config system property to specify a jaas file.
Alternatively, you can programmatically set the credentials for the client with a snippet like this:
import java.util.HashMap;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;

public static void jaasClientConfig(final String username, final String password) throws Exception {
    System.setProperty("java.security.auth.login.config", "");
    Configuration.setConfiguration(new Configuration() {
        public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
            HashMap<String, String> idMap = new HashMap<>();
            idMap.put("serviceName", "kafka"); // Seems to be optional
            idMap.put("username", username);
            idMap.put("password", password);

            AppConfigurationEntry ace = new AppConfigurationEntry(
                    "org.apache.kafka.common.security.plain.PlainLoginModule",
                    AppConfigurationEntry.LoginModuleControlFlag.REQUIRED, idMap);
            AppConfigurationEntry[] entry = { ace };
            return entry;
        }
    });
}

Connection pool is not initialized in unit test with scalikejdbc 2.4.1

I have a problem when running a unit test using specs2 with scalikejdbc 2.4.1 and scalikejdbc-config 2.4.1.
Here is my code:
object PostDAOImplSpec extends Specification {
  sequential

  DBs.setupAll
  implicit val session = AutoSession

  "resolveAll shoudn't have any syntax error" in new AutoRollback {
    val postIds = DB readOnly { implicit session =>
      sql"select post_id from posts".map(_.long(1)).list.apply()
    }
  }

  DBs.closeAll()
}
Here are the logs:
09:11:16.931 [main] DEBUG scalikejdbc.ConnectionPool$ - Registered connection pool : ConnectionPool(url:jdbc:mysql://localhost/bbs, user:root) using factory : <default>
09:11:17.130 [main] DEBUG scalikejdbc.ConnectionPool$ - Registered connection pool : ConnectionPool(url:jdbc:mysql://localhost/bbs, user:root) using factory : <default>
java.lang.IllegalStateException: Connection pool is not yet initialized.(name:'default)
java.lang.IllegalStateException: Connection pool is not yet initialized.(name:'default)
As you can see from the first two lines, scalikejdbc found the database configuration, but it can't initialize the connection pool.
Do you have any idea? Thanks.
The DBs.closeAll() call closes your connection pools before your tests run: the statements in the spec's object body are executed when the object is constructed, while the examples themselves run later.
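A minimal sketch of one way around this, assuming scalikejdbc-test's AutoRollback helper and specs2's AfterAll trait are available in the versions being used: keep the setup at construction time, but defer the teardown until all examples have run.
import org.specs2.mutable.Specification
import org.specs2.specification.AfterAll
import scalikejdbc._
import scalikejdbc.config._
import scalikejdbc.specs2.mutable.AutoRollback

object PostDAOImplSpec extends Specification with AfterAll {
  sequential

  // set up the pools once, when the spec object is constructed
  DBs.setupAll()

  "resolveAll shouldn't have any syntax error" in new AutoRollback {
    val postIds = DB readOnly { implicit session =>
      sql"select post_id from posts".map(_.long(1)).list.apply()
    }
    postIds must not(beNull)
  }

  // runs once after all examples have finished, instead of during construction
  override def afterAll(): Unit = DBs.closeAll()
}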

Unknown UID warning when passing message to remote actor

Here is the code:
package com.packt.akka

import akka.actor.{Actor, ActorRef, ActorSystem, Props}
import com.typesafe.config.ConfigFactory

object MembersService {
  val config = ConfigFactory.load.getConfig("MembersService")
  val system = ActorSystem("MembersService", config)
  val worker = system.actorOf(Props[Worker], "remote-worker")
  println(s"Worker actor path is ${worker.path}")
}

object MemberServiceLookup {
  val config = ConfigFactory.load.getConfig("MemberServiceLookup")
  val system = ActorSystem("MemberServiceLookup", config)
  val worker = system.actorSelection("akka.tcp://MembersService@127.0.0.1:2552/user/remote-worker")
  worker ! Worker.Work("Hi Remote Actor")
}

object MembersServiceRemoteCreation extends App {
  val config = ConfigFactory.load.getConfig("MembersServiceRemoteCreation")
  val system = ActorSystem("MembersServiceRemoteCreation", config)
  val workerActor = system.actorOf(Props[Worker], "workerActorRemote")
  println(s"The remote path of worker Actor is ${workerActor.path}")
  workerActor ! Worker.Work("Hi Remote Worker")
}

class Worker extends Actor {
  import Worker._

  def receive = {
    case msg: Work =>
      println(s"I received Work Message and My ActorRef: ${self}")
  }
}

object Worker {
  case class Work(message: String)
}
application.conf:
MembersService {
  akka {
    actor {
      provider = "akka.remote.RemoteActorRefProvider"
    }
    remote {
      enabled-transports = ["akka.remote.netty.tcp"]
      netty.tcp {
        hostname = "127.0.0.1"
        port = 2552
      }
    }
  }
}

MemberServiceLookup {
  akka {
    actor {
      provider = "akka.remote.RemoteActorRefProvider"
    }
    remote {
      enabled-transports = ["akka.remote.netty.tcp"]
      netty.tcp {
        hostname = "127.0.0.1"
        port = 2553
      }
    }
  }
}

MembersServiceRemoteCreation {
  akka {
    actor {
      provider = "akka.remote.RemoteActorRefProvider"
      deployment {
        /workerActorRemote {
          remote: "akka.tcp://MembersService@127.0.0.1:2552"
        }
      }
    }
    remote {
      enabled-transports = ["akka.remote.netty.tcp"]
      netty.tcp {
        hostname = "127.0.0.1"
        port = 2558
      }
    }
  }
}
build.sbt:
name := "Hotswap Behavior"
version := "1.0"
scalaVersion := "2.11.7"
sbtVersion := "0.13.11"
libraryDependencies ++= Seq(
"com.typesafe.akka" %% "akka-actor" % "2.4.0",
"com.typesafe.akka" %% "akka-remote" % "2.4.0")
Running the app gives the following output (and quickly heats up the CPU):
/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/bin/java -Didea.launcher.port=7533 "-Didea.launcher.bin.path=/Applications/IntelliJ IDEA CE.app/Contents/bin" -Dfile.encoding=UTF-8 -classpath "/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/charsets.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/deploy.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/cldrdata.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/dnsns.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/jaccess.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/jfxrt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/localedata.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/nashorn.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/sunec.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/sunjce_provider.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/sunpkcs11.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/ext/zipfs.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/javaws.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/jce.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/jfr.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/jfxswt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/jsse.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/management-agent.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/plugin.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/resources.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/jre/lib/rt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/lib/ant-javafx.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/lib/dt.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/lib/javafx-mx.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/lib/jconsole.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/lib/packager.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/lib/sa-jdi.jar:/Library/Java/JavaVirtualMachines/jdk1.8.0_72.jdk/Contents/Home/lib/tools.jar:/Users/kaiyin/IdeaProjects/Learning Akka (video) code/section5/akka-remoting/target/scala-2.11/classes:/Users/kaiyin/.ivy2/cache/org.uncommons.maths/uncommons-maths/jars/uncommons-maths-1.2.2a.jar:/Users/kaiyin/.ivy2/cache/com.typesafe/config/bundles/config-1.3.0.jar:/Users/kaiyin/.ivy2/cache/com.typesafe.akka/akka-actor_2.11/jars/akka-actor_2.11-2.4.0.jar:/Users/kaiyin/.ivy2/cache/com.typesafe.akka/akka-protobuf_2.11/jars/akka-protobuf_2.11-2.4.0.jar:/Users/kaiyin/.ivy2/cache/com.typesafe.akka/akka-remote_2.11/jars/akka-remote_2.11-2.4.0.jar:/Users/kaiyin/.ivy2/cache/io.netty/netty/bundles/netty-3.10.3.Final.jar:/Users/kaiyin/.ivy2/cache/org.scala-lang/scala-library/jars/scala-library-2.11.7.jar:/Applications/IntelliJ IDEA CE.app/Contents/lib/idea_rt.jar" com.intellij.rt.execution.application.AppMain com.packt.akka.MembersServiceRemoteCreation
[INFO] [04/05/2016 16:54:43.982] [main] [akka.remote.Remoting] Starting remoting
[INFO] [04/05/2016 16:54:44.143] [main] [akka.remote.Remoting] Remoting started; listening on addresses :[akka.tcp://MembersServiceRemoteCreation#127.0.0.1:2558]
[INFO] [04/05/2016 16:54:44.144] [main] [akka.remote.Remoting] Remoting now listens on addresses: [akka.tcp://MembersServiceRemoteCreation#127.0.0.1:2558]
The remote path of worker Actor is akka.tcp://MembersService#127.0.0.1:2552/remote/akka.tcp/MembersServiceRemoteCreation#127.0.0.1:2558/user/workerActorRemote
[WARN] [04/05/2016 16:54:44.262] [MembersServiceRemoteCreation-akka.remote.default-remote-dispatcher-6] [akka.tcp://MembersServiceRemoteCreation#127.0.0.1:2558/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FMembersService%40127.0.0.1%3A2552-0] Association with remote system [akka.tcp://MembersService#127.0.0.1:2552] has failed, address is now gated for [5000] ms. Reason: [Association failed with [akka.tcp://MembersService#127.0.0.1:2552]] Caused by: [Connection refused: /127.0.0.1:2552]
[INFO] [04/05/2016 16:54:44.266] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-2] [akka://MembersServiceRemoteCreation/deadLetters] Message [akka.remote.DaemonMsgCreate] from Actor[akka://MembersServiceRemoteCreation/deadLetters] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [04/05/2016 16:54:44.266] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-2] [akka://MembersServiceRemoteCreation/deadLetters] Message [com.packt.akka.Worker$Work] from Actor[akka://MembersServiceRemoteCreation/deadLetters] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [04/05/2016 16:54:45.165] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-3] [akka://MembersServiceRemoteCreation/deadLetters] Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://MembersServiceRemoteCreation/system/remote-watcher#-1480050878] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [3] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [04/05/2016 16:54:46.162] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-3] [akka://MembersServiceRemoteCreation/deadLetters] Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://MembersServiceRemoteCreation/system/remote-watcher#-1480050878] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [4] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [04/05/2016 16:54:47.164] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-3] [akka://MembersServiceRemoteCreation/deadLetters] Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://MembersServiceRemoteCreation/system/remote-watcher#-1480050878] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [5] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [04/05/2016 16:54:48.163] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-3] [akka://MembersServiceRemoteCreation/deadLetters] Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://MembersServiceRemoteCreation/system/remote-watcher#-1480050878] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [6] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [04/05/2016 16:54:49.162] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-3] [akka://MembersServiceRemoteCreation/deadLetters] Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://MembersServiceRemoteCreation/system/remote-watcher#-1480050878] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [7] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[WARN] [04/05/2016 16:54:49.285] [MembersServiceRemoteCreation-akka.remote.default-remote-dispatcher-5] [akka.tcp://MembersServiceRemoteCreation#127.0.0.1:2558/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FMembersService%40127.0.0.1%3A2552-0] Association with remote system [akka.tcp://MembersService#127.0.0.1:2552] has failed, address is now gated for [5000] ms. Reason: [Association failed with [akka.tcp://MembersService#127.0.0.1:2552]] Caused by: [Connection refused: /127.0.0.1:2552]
[INFO] [04/05/2016 16:54:50.161] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-2] [akka://MembersServiceRemoteCreation/deadLetters] Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://MembersServiceRemoteCreation/system/remote-watcher#-1480050878] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [8] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [04/05/2016 16:54:51.161] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-3] [akka://MembersServiceRemoteCreation/deadLetters] Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://MembersServiceRemoteCreation/system/remote-watcher#-1480050878] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [9] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [04/05/2016 16:54:52.171] [MembersServiceRemoteCreation-akka.actor.default-dispatcher-2] [akka://MembersServiceRemoteCreation/deadLetters] Message [akka.remote.RemoteWatcher$Heartbeat$] from Actor[akka://MembersServiceRemoteCreation/system/remote-watcher#-1480050878] to Actor[akka://MembersServiceRemoteCreation/deadLetters] was not delivered. [10] dead letters encountered, no more dead letters will be logged. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[WARN] [04/05/2016 16:54:54.304] [MembersServiceRemoteCreation-akka.remote.default-remote-dispatcher-5] [akka.tcp://MembersServiceRemoteCreation#127.0.0.1:2558/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FMembersService%40127.0.0.1%3A2552-0] Association with remote system [akka.tcp://MembersService#127.0.0.1:2552] has failed, address is now gated for [5000] ms. Reason: [Association failed with [akka.tcp://MembersService#127.0.0.1:2552]] Caused by: [Connection refused: /127.0.0.1:2552]
[WARN] [04/05/2016 16:54:59.171] [MembersServiceRemoteCreation-akka.remote.default-remote-dispatcher-6] [akka.tcp://MembersServiceRemoteCreation#127.0.0.1:2558/system/remote-watcher] Detected unreachable: [akka.tcp://MembersService#127.0.0.1:2552]
[WARN] [04/05/2016 16:54:59.171] [MembersServiceRemoteCreation-akka.remote.default-remote-dispatcher-5] [akka.remote.Remoting] Association to [akka.tcp://MembersService#127.0.0.1:2552] with unknown UID is reported as quarantined, but address cannot be quarantined without knowing the UID, gating instead for 5000 ms.
Any idea what went wrong here?
Looking at the config, the port configuration is wrong.
Caused by: [Connection refused: /127.0.0.1:2552]
Ensure you have an Akka node listening on that port.
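The run command in the question only launches MembersServiceRemoteCreation, so nothing ever binds port 2552. A minimal sketch of a runner (the MembersServiceApp name is hypothetical) that forces the MembersService object from the question to initialize before MembersServiceRemoteCreation is launched in a second JVM:
object MembersServiceApp extends App {
  // Referencing the object runs its initializer, which creates the "MembersService"
  // ActorSystem and binds akka.tcp://MembersService@127.0.0.1:2552.
  MembersService.system
  println(s"MembersService is up, worker at ${MembersService.worker.path}")
}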

Akka Remote Actors- : can not able to establish a connection between local and remote actors

Hi, I am following this tutorial and I copy-pasted the code as it is, except I changed the port from 5150 to 2552, and I am facing these errors.
HelloLocal project errors
[warn] there were 1 deprecation warning(s); re-run with -deprecation for details
[warn] one warning found
[info] Running HelloLocal
[INFO] [11/04/2014 22:37:50.707] [run-main-0] [Remoting] Starting remoting
[INFO] [11/04/2014 22:37:51.857] [run-main-0] [Remoting] Remoting started; listening on addresses :[akka.tcp://LocalSystem#127.0.1.1:2552]
[INFO] [11/04/2014 22:37:51.863] [run-main-0] [Remoting] Remoting now listens on addresses: [akka.tcp://LocalSystem#127.0.1.1:2552]
[ERROR] [11/04/2014 22:37:51.904] [LocalSystem-akka.actor.default-dispatcher-3] [RemoteActorRefProvider] Error while looking up address [akka://HelloRemoteSystem#127.0.0.1:2552]
akka.remote.RemoteTransportException: No transport is loaded for protocol: [akka], available protocols: [akka.tcp]
at akka.remote.Remoting$.localAddressForRemote(Remoting.scala:88)
at akka.remote.Remoting.localAddressForRemote(Remoting.scala:130)
at akka.remote.RemoteActorRefProvider.actorFor(RemoteActorRefProvider.scala:321)
at akka.actor.ActorRefFactory$class.actorFor(ActorRefProvider.scala:258)
at akka.actor.ActorCell.actorFor(ActorCell.scala:369)
at LocalActor.<init>(HelloLocal.scala:7)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:374)
at akka.util.Reflect$.instantiate(Reflect.scala:45)
at akka.actor.NoArgsReflectConstructor.produce(Props.scala:361)
at akka.actor.Props.newActor(Props.scala:252)
at akka.actor.ActorCell.newActor(ActorCell.scala:552)
at akka.actor.ActorCell.create(ActorCell.scala:578)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:456)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[INFO] [11/04/2014 22:37:51.913] [LocalSystem-akka.actor.default-dispatcher-5] [akka://HelloRemoteSystem#127.0.0.1:2552/user/RemoteActor] Message [java.lang.String] from Actor[akka://LocalSystem/user/LocalActor#543076206] to Actor[akka://HelloRemoteSystem#127.0.0.1:2552/user/RemoteActor] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
HelloRemote project errors are:
[info] Running HelloRemote
Remote Actor receive messgage : The remote actor is alive
[INFO] [11/04/2014 22:38:40.915] [HelloRemoteSystem-akka.actor.default-dispatcher-2] [akka://HelloRemoteSystem/deadLetters] Message [java.lang.String] from Actor[akka://HelloRemoteSystem/user/RemoteActor#-1308265074] to Actor[akka://HelloRemoteSystem/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
I am new to Akka and I am following this tutorial for learning purposes, and now there are errors. Please help me solve them.
Edit: after following the akka-remoting 2.3.6 docs, I am now getting different errors.
application.conf (HelloLocal):
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    enabled-transport = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "127.0.0.1"
      port = 0
    }
  }
}
HelloLocal.scala
import akka.actor._

class LocalActor extends Actor {
  //create the remote actor
  val remote = context.actorSelection("akka.tcp://HelloRemoteSystem@127.0.0.1:2552/user/RemoteActor")
  var counter = 0

  def receive = {
    case "START" =>
      remote ! "Hello from the LocalActor"
    case msg: String =>
      println(s"LocalActor received message: '$msg'")
      if (counter < 5) {
        sender ! "Hello back to you"
        counter += 1
      }
  }
}

object HelloLocal extends App {
  implicit val system = ActorSystem("LocalSystem")
  val localActor = system.actorOf(Props[LocalActor], name = "LocalActor")
  localActor ! "START"
}
and HelloRemote application.conf is
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    enabled-transport = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2552
    }
  }
}
These are the errors now
HelloLocal
[info] Running HelloLocal
[INFO] [11/05/2014 12:42:32.674] [run-main-1] [Remoting] Starting remoting
[INFO] [11/05/2014 12:42:34.031] [run-main-1] [Remoting] Remoting started; listening on addresses :[akka.tcp://LocalSystem#127.0.0.1:45047]
[INFO] [11/05/2014 12:42:34.040] [run-main-1] [Remoting] Remoting now listens on addresses: [akka.tcp://LocalSystem#127.0.0.1:45047]
[WARN] [11/05/2014 12:42:34.367] [LocalSystem-akka.remote.default-remote-dispatcher-5] [akka.tcp://LocalSystem#127.0.0.1:45047/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FHelloRemoteSystem%40127.0.0.1%3A2552-0/endpointWriter] AssociationError [akka.tcp://LocalSystem#127.0.0.1:45047] -> [akka.tcp://HelloRemoteSystem#127.0.0.1:2552]: Error [Invalid address: akka.tcp://HelloRemoteSystem#127.0.0.1:2552] [
akka.remote.InvalidAssociation: Invalid address: akka.tcp://HelloRemoteSystem#127.0.0.1:2552
Caused by: akka.remote.transport.Transport$InvalidAssociationException: Connection refused: /127.0.0.1:2552
]
[WARN] [11/05/2014 12:42:34.419] [LocalSystem-akka.remote.default-remote-dispatcher-13] [Remoting] Tried to associate with unreachable remote address [akka.tcp://HelloRemoteSystem#127.0.0.1:2552]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: Connection refused: /127.0.0.1:2552
[INFO] [11/05/2014 12:42:34.451] [LocalSystem-akka.actor.default-dispatcher-2] [akka://LocalSystem/deadLetters] Message [java.lang.String] from Actor[akka://LocalSystem/user/LocalActor#1798933307] to Actor[akka://LocalSystem/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
HelloRemote errors are
[info] Running HelloRemote
Remote Actor receive messgage : The remote actor is alive
[INFO] [11/05/2014 12:42:35.654] [HelloRemoteSystem-akka.actor.default-dispatcher-2] [akka://HelloRemoteSystem/deadLetters] Message [java.lang.String] from Actor[akka://HelloRemoteSystem/user/RemoteActor#1925624739] to Actor[akka://HelloRemoteSystem/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
You are probably using a more up-to-date version of Akka than was used in the tutorial.
For 2.2.3 and above your configuration needs to resemble the following (note that the key is enabled-transports, plural, whereas your config uses enabled-transport):
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2552
    }
  }
}
Depending on your version, you can find more information at http://doc.akka.io/docs/akka/2.3.6/scala/remoting.html