I have the following class that creates an actor system, supplying the configuration as a string in code. However, I get an exception saying that Netty could not be started on the default host and port, even though I supplied different values in the config:
class RemoteActorSystemCreator extends ActorSystemCreator {
  def create(name: String, hostName: String, port: String) = {
    val string: Config = ConfigFactory.parseString(
      s"""akka {
           actor {
             provider = "akka.remote.RemoteActorRefProvider"
           }
           remote {
             enabled-transports = ["akka.remote.netty.tcp"]
             netty.tcp {
               hostname = "$hostName"
               port = $port
             }
           }
         }"""
    )
    ActorSystem.create(name, ConfigFactory.load(string))
  }
}
org.jboss.netty.channel.ChannelException: Failed to bind to: /127.0.1.1:2552
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:298)
at akka.remote.netty.NettyRemoteServer.start(Server.scala:51)
at akka.remote.netty.NettyRemoteTransport.start(NettyRemoteSupport.scala:181)
You are using Akka version 2.1.x, but the configuration is in 2.2 format.
For 2.1.4 the configuration property for the port is akka.remote.netty.port.
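Applied to the factory above, a 2.1.x-style configuration would look roughly like the sketch below. Only the akka.remote.netty.port property comes from this answer; the matching hostname key is assumed to follow the same layout.

val config: Config = ConfigFactory.parseString(
  s"""akka {
       actor {
         provider = "akka.remote.RemoteActorRefProvider"
       }
       remote {
         netty {
           hostname = "$hostName"   // assumed 2.1.x counterpart of netty.tcp.hostname
           port = $port             // akka.remote.netty.port, as stated above
         }
       }
     }"""
)
ActorSystem.create(name, ConfigFactory.load(config))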
Related
I have a Mongo DB Docker container running on 192.168.0.229. From another computer, I can access it via:
> mongo "mongodb://192.168.0.229:27017/test"
But when I add that configuration string (host="192.168.0.229") to my Play Framework app, I get a timeout error:
[debug] application - Login Form Success: UserData(larry#gmail.com,testPW)
[error] p.a.h.DefaultHttpErrorHandler -
! #7m7kggikl - Internal server error, for (POST) [/] ->
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[TimeoutException: Future timed out after [30 seconds]]]
In the past, the connection was successful with host="localhost" and even an Atlas cluster (host="mycluster.gepld.mongodb.net") for the hostname, so there were no problems connecting previously with the same code. For some reason, Play Framework does not want to connect to this endpoint!
Could it be because the hostname is an IP address? Or is Play/Akka doing something under the covers to block the connection (or something that makes Mongo/Docker refuse it)?
I'm using this driver:
"org.mongodb.scala" %% "mongo-scala-driver" % "4.4.0"
Perhaps I should switch to the Reactive Scala Driver? Any help would be appreciated.
Clarifications:
The Mongo DB Docker container is running on a linux machine on my local network. This container is reachable from within my local network at 192.168.0.229. The goal is to set my Play Framework app configuration to point to the DB at this address, so that as long as the Docker container is running, I can develop from any computer on my local network. Currently, I am able to access the container through the mongo shell on any computer:
> mongo "mongodb://192.168.0.229:27017/test"
I have a Play Framework app with the following in the Application.conf:
datastore {
  # Dev
  host: "192.168.0.229"
  port: 27017
  dbname: "test"
  user: ""
  password: ""
}
This data is used in a connection helper file called DataStore.scala:
package model.db.mongo

import org.mongodb.scala._
import utils.config.AppConfiguration

trait DataStore extends AppConfiguration {
  lazy val dbHost = config.getString("datastore.host")
  lazy val dbPort = config.getInt("datastore.port")
  lazy val dbUser = getConfigString("datastore.user", "")
  lazy val dbName = getConfigString("datastore.dbname", "")
  lazy val dbPasswd = getConfigString("datastore.password", "")

  // MongoDB Atlas method (localhost if DB user is empty)
  val uri: String = s"mongodb+srv://$dbUser:$dbPasswd@$dbHost/$dbName?retryWrites=true&w=majority"
  //val uri: String = "mongodb+svr://192.168.0.229:27017/?compressors=disabled&gssapiServiceName=mongodb"

  System.setProperty("org.mongodb.async.type", "netty")

  val mongoClient: MongoClient = if (getConfigString("datastore.user", "").isEmpty()) MongoClient() else MongoClient(uri)
  print(mongoClient.toString)
  print(mongoClient.listDatabaseNames())

  val database: MongoDatabase = mongoClient.getDatabase(dbName)

  def close = mongoClient.close() // Do this when logging out
}
When you start the app, you open localhost:9000, which is simply a login form. When you submit data that matches a document in the users collection, the Play app times out:
[error] p.a.h.DefaultHttpErrorHandler -
! #7m884abc4 - Internal server error, for (POST) [/] ->
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[TimeoutException: Future timed out after [30 seconds]]]
at play.api.http.HttpErrorHandlerExceptions$.$anonfun$convertToPlayException$2(HttpErrorHandler.scala:381)
at scala.Option.map(Option.scala:242)
at play.api.http.HttpErrorHandlerExceptions$.convertToPlayException(HttpErrorHandler.scala:380)
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:373)
at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:264)
at play.core.server.AkkaHttpServer$$anonfun$2.applyOrElse(AkkaHttpServer.scala:430)
at play.core.server.AkkaHttpServer$$anonfun$2.applyOrElse(AkkaHttpServer.scala:422)
at scala.concurrent.impl.Promise$Transformation.run(Promise.scala:454)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:63)
at akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:100)
Caused by: java.util.concurrent.TimeoutException: Future timed out after [30 seconds]
at scala.concurrent.impl.Promise$DefaultPromise.tryAwait0(Promise.scala:212)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:225)
at scala.concurrent.Await$.$anonfun$result$1(package.scala:201)
at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread$$anon$3.block(ThreadPoolBuilder.scala:174)
at java.base/java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3118)
at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread.blockOn(ThreadPoolBuilder.scala:172)
at akka.dispatch.BatchingExecutor$BlockableBatch.blockOn(BatchingExecutor.scala:116)
at scala.concurrent.Await$.result(package.scala:124)
at model.db.mongo.DataHelpers$ImplicitObservable.headResult(DataHelpers.scala:27)
at model.db.mongo.DataHelpers$ImplicitObservable.headResult$(DataHelpers.scala:27)
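The headResult frames at the bottom come from a small blocking helper in DataHelpers.scala (not shown here). Based on the driver's documented helper pattern, it presumably looks something like the sketch below, which is where the Await timeout originates; the exact names and the 30-second duration are assumptions inferred from the stack trace.

import scala.concurrent.Await
import scala.concurrent.duration._
import org.mongodb.scala._

object DataHelpers {
  // Hypothetical reconstruction: headResult() blocks the calling thread until
  // the first document arrives, so a database that never responds surfaces as
  // a TimeoutException once the wait expires.
  implicit class ImplicitObservable[T](val observable: Observable[T]) {
    def headResult(): T = Await.result(observable.head(), 30.seconds)
  }
}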
The call to the Users collection is defined in UserAccounts.scala:
import org.mongodb.scala.{Document, MongoCollection}
import org.mongodb.scala.bson.codecs.Macros._
import org.mongodb.scala.MongoClient.DEFAULT_CODEC_REGISTRY
import org.bson.codecs.configuration.CodecRegistries.{fromProviders, fromRegistries}
import play.api.Logger

case class UserAccount(_id: String, fullname: String, username: String, password: String)

object UserAccount extends DataStore {
  val logger: Logger = Logger("database")

  // Required for using case classes
  val codecRegistry = fromRegistries(fromProviders(classOf[UserAccount]), DEFAULT_CODEC_REGISTRY)

  // Using the case class to get a collection
  val coll: MongoCollection[UserAccount] = database.withCodecRegistry(codecRegistry).getCollection("users")

  // Using Document to get a collection
  val listings: MongoCollection[Document] = database.getCollection("users")

  def isValidLogin(username: String, password: String): Boolean = {
    findUser(username) match {
      case Some(u: UserAccount) => if (password.equals(u.password)) { true } else { false }
      case None => false
    }
  }
}
Just an FYI if anyone runs into this problem. I had a bad line in my DataStore.scala file:
val mongoClient: MongoClient = if (getConfigString("datastore.user", "").isEmpty()) MongoClient() else MongoClient(uri)
Since I was trying to connect without a username (there's no auth on my test db), the above line was saying, "If there's no username, you must be trying to connect to the default MongoClient() location, which is localhost". My mistake.
I simply changed the above line to this:
val mongoClient: MongoClient = MongoClient(uri)
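If you do want to keep a no-auth branch, an alternative is the sketch below (not from the original answer). It assumes the plain mongodb:// form for a direct host/port, since the +srv form only applies to DNS seed-list hosts such as Atlas; dbHost, dbPort, dbName, dbUser and dbPasswd are the values already defined in DataStore.scala above.

// Sketch: always honour the configured host, choosing the URI form by whether
// credentials are present, instead of silently falling back to localhost.
val uri: String =
  if (dbUser.isEmpty)
    s"mongodb://$dbHost:$dbPort/$dbName"
  else
    s"mongodb+srv://$dbUser:$dbPasswd@$dbHost/$dbName?retryWrites=true&w=majority"

val mongoClient: MongoClient = MongoClient(uri)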
I need your help. I need to connect to AWS Aurora PostgreSQL using Liquibase. It is already configured for my local machine and works fine there, but I'm having issues with the SSH configuration for the remote host.
I'm using id 'org.hidetake.ssh' version '2.10.1' and id 'org.liquibase.gradle' version '2.0.4'.
I'm able to run commands directly on the host machine, like getting the date with execute('date') below, but I have no idea why Liquibase fails with:
Unexpected error running Liquibase: liquibase.exception.DatabaseException: liquibase.exception.DatabaseException: Connection could not be created to jdbc:postgresql://xxxx.rds.amazonaws.com:5432/postgres with driver org.postgresql.Driver. The connection attempt failed.
Here is my build.gradle setup:
ssh.settings {
    knownHosts = allowAnyHosts
    logging = 'stdout'
    identity = file("${System.properties['user.home']}/myfolder/.ssh/id_rsa")
}

remotes {
    dev {
        host = 'xxx.xxx.xxx.xxx'
        port = 22
        user = 'ec2-user'
        identity = file("${System.properties['user.home']}/myfolder/.ssh/id_rsa")
    }
}

ssh.run {
    session(remotes.dev) {
        forwardLocalPort port: 5432, hostPort: 5432
        execute('date')
        liquibase {
            activities {
                main {
                    //changeLogFile changeLog
                    url 'jdbc:postgresql://xxxx.rds.amazonaws.com:5432/postgres'
                    username feedSqlUserDev
                    password feedSqlUserPasswordDev
                    logLevel 'debug'
                }
            }
        }
    }
}
Could you please help me with this? What am I doing wrong?
I also had to connect to an SSH bastion host before running Liquibase updates. My solution is based on the answer by the plugin author in https://github.com/int128/gradle-ssh-plugin/issues/246.
Here is my setup:
ssh.settings {
    knownHosts = allowAnyHosts
    logging = 'stdout'
    identity = file("${System.properties['user.home']}/.ssh/id_rsa")
}

remotes {
    bastion {
        host = '<hostname>'
        user = '<username>'
    }
}

liquibase {
    activities {
        main {
            changeLogFile '...'
            url 'jdbc:postgresql://localhost:5438/***'
            username '***'
            password '***'
            driver 'org.postgresql.Driver'
        }
    }
}

task('sshTunnelStart') {
    doFirst {
        project.ext.ready = new CountDownLatch(1)
        project.ext.done = new CountDownLatch(1)
        Thread.start {
            ssh.run {
                session(remotes.bastion) {
                    forwardLocalPort port: 5438,
                                     host: '<real db hostname>',
                                     hostPort: 5432
                    project.ready.countDown()
                    project.done.await(5, TimeUnit.MINUTES) // liquibase update timeout
                }
            }
        }
        ready.await(10, TimeUnit.SECONDS) // start tunnel timeout
    }
}

task('sshTunnelStop') {
    doLast {
        // teardown tunnel
        project.done.countDown()
    }
}

update.dependsOn(sshTunnelStart)
update.finalizedBy(sshTunnelStop)
Note that in the liquibase config I use localhost:5438, since that is the local port forwarded to the remote. The same port is used as the 'port' parameter of forwardLocalPort; the 'host' parameter is set to the remote database host, and 'hostPort' is the database's own port. The last part of the config adds task dependencies so that the liquibase update starts and stops the tunnel.
I'm new to Akka and wanted to connect two PCs using Akka remoting, just to run some code on both of them (as 2 actors). I tried the example in the Akka docs; all I really did was add the two IP addresses to the config file, yet I always get this error.
The first machine gives me this error:
[info] [ERROR] [11/20/2018 13:58:48.833]
[ClusterSystem-akka.remote.default-remote-dispatcher-6]
[akka.remote.artery.Association(akka://ClusterSystem)] Outbound
control stream to [akka://ClusterSystem@192.168.1.2:2552] failed.
Restarting it. Handshake with [akka://ClusterSystem@192.168.1.2:2552]
did not complete within 20000 ms
(akka.remote.artery.OutboundHandshake$HandshakeTimeoutException:
Handshake with [akka://ClusterSystem@192.168.1.2:2552] did not
complete within 20000 ms)
And the second machine:
Exception in thread "main"
akka.remote.RemoteTransportException: Failed to bind TCP to
[192.168.1.3:2552] due to: Bind failed because of
java.net.BindException: Cannot assign requested address: bind
Config file content:
akka {
  actor {
    provider = cluster
  }
  remote {
    artery {
      enabled = on
      transport = tcp
      canonical.hostname = "192.168.1.3"
      canonical.port = 0
    }
  }
  cluster {
    seed-nodes = [
      "akka://ClusterSystem@192.168.1.3:2552",
      "akka://ClusterSystem@192.168.1.2:2552"]

    # auto downing is NOT safe for production deployments.
    # you may want to use it during development, read more about it in the docs.
    auto-down-unreachable-after = 120s
  }
}

# Enable metrics extension in akka-cluster-metrics.
akka.extensions = ["akka.cluster.metrics.ClusterMetricsExtension"]

# Sigar native library extract location during tests.
# Note: use per-jvm-instance folder when running multiple jvm on one host.
akka.cluster.metrics.native-library-extract-folder = ${user.dir}/target/native
First of all, you don't need cluster configuration for Akka remoting. Both PCs (nodes) should have remoting enabled with a concrete port instead of "0"; that way you know which port to connect to.
Use the configurations below.
PC1
akka {
  actor {
    provider = remote
  }
  remote {
    artery {
      enabled = on
      transport = tcp
      canonical.hostname = "192.168.1.3"
      canonical.port = 19000
    }
  }
}
PC2
akka {
  actor {
    provider = remote
  }
  remote {
    artery {
      enabled = on
      transport = tcp
      canonical.hostname = "192.168.1.4"
      canonical.port = 18000
    }
  }
}
Use the actor path below to connect to any actor deployed on PC2 from PC1:
akka://<PC2-ActorSystem>@192.168.1.4:18000/user/<actor deployed in PC2>
Use the actor path below to connect from PC2 to PC1:
akka://<PC1-ActorSystem>@192.168.1.3:19000/user/<actor deployed in PC1>
The port numbers and IP addresses are samples.
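From code, resolving one of these paths might look like the sketch below; the system and actor names ("Pc1System", "Pc2System", "worker") are placeholders, not taken from the question.

import akka.actor.{ActorSelection, ActorSystem}
import com.typesafe.config.ConfigFactory

object Pc1Main extends App {
  // Minimal sketch run on PC1: look up an actor that PC2's system registered
  // under the name "worker" and send it a message.
  val system = ActorSystem("Pc1System", ConfigFactory.load())
  val remoteWorker: ActorSelection =
    system.actorSelection("akka://Pc2System@192.168.1.4:18000/user/worker")
  remoteWorker ! "hello from PC1"
}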
I just stopped a Scala Spray executable that was running on an EC2 Ubuntu instance so I could launch a newer version of the app. When I try to run the new executable I get the following error:
ubuntu#ip-172-32-92:~/suredbits-dfs$ ./target/universal/stage/bin/suredbits-dfs
[WARN] [08/14/2015 03:22:30.314] [NflDbApiActorSystemConfig-akka.actor.default-dispatcher-5] [akka://NflDbApiActorSystemConfig/user/IO-HTTP/listener-0] Bind to ec2-52-116-195.us-west-2.compute.amazonaws.com/172.32.92:80 failed
I have checked to make sure port 80 is open and available by running this command:
netstat -anp | grep 80
which doesn't return anything. So the port appears to be free, and Spray is simply failing to bind. Here is how I am attempting to start my server in my executable:
package com.suredbits.dfs

/**
 * Created by chris on 8/9/15.
 */
import akka.actor.ActorSystem
import com.github.nfldb.config.{NflDbApiActorSystemConfig, NflDbApiDbConfig}
import com.suredbits.dfs.nfl.scoring.NflPlayerScoringService
import spray.routing.SimpleRoutingApp

object Main extends App with SimpleRoutingApp with NflPlayerScoringService
  with NflDbApiDbConfig with NflDbApiActorSystemConfig {

  import actorSystem._

  /*
  startServer(interface = "localhost", port = 80) {
    path("hello") {
      get {
        complete {
          <h1>Say hello to spray</h1>
        }
      }
    } ~ nflPlayerScoringServiceRoutes
  }
  */

  startServer(interface = "ec2-52-116-195.us-west-2.compute.amazonaws.com", port = 80) {
    path("hello") {
      get {
        complete {
          <h1>Say hello to spray</h1>
        }
      }
    } ~ nflPlayerScoringServiceRoutes
  }
}
Start the process with sudo:
sudo ./target/universal/stage/bin/suredbits-dfs
Why? You are trying to use a privileged port (a port under 1024). Only root has access to those ports. The clue that tipped me off to this was that you checked netstat and nothing else was on port 80.
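If you would rather not run the JVM as root, a common alternative (not part of the original answer) is to bind Spray to an unprivileged port and forward port 80 to it at the OS or load-balancer level. A minimal sketch against the question's code, with 8080 as an arbitrary choice:

// Bind to an unprivileged port so no root privileges are needed; port 80 can
// then be forwarded to 8080 outside the app (e.g. an iptables REDIRECT rule
// or an ELB listener).
startServer(interface = "0.0.0.0", port = 8080) {
  path("hello") {
    get {
      complete {
        <h1>Say hello to spray</h1>
      }
    }
  } ~ nflPlayerScoringServiceRoutes
}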
I have the following configuration:
cargo {
    containerId = deployContainerId
    port = jbossManagementPort

    deployable {
        file = tasks.getByPath(':frontend:war').archivePath
        context = 'xxxxxx'
    }

    remote {
        hostname = 'localhost'
        username = 'xxxxxxx'
        password = 'xxxxxxx'
    }

    local {
        homeDir = file(jbossHome)
        timeout = 60000
    }
}
When I invoke Gradle with
gradle -PjbossManagementPort=12345 -PdeployContainerId=jboss7x -PjbossHome=/opt/jboss cargoRedeployRemote
The configured port is ignored. It still tries to connect to 9999. I have tried variants, such as
gradle -Pcargo.port=12345 -PdeployContainerId=jboss7x -PjbossHome=/opt/jboss cargoRedeployRemote
And
gradle -Pcargo.jboss.management-native.port=12345 -PdeployContainerId=jboss7x -PjbossHome=/opt/jboss cargoRedeployRemote
But neither has any effect.
How do I tell Cargo to use a different port than the default?
The solution is to pass the Cargo property with -D rather than -P:
gradle -Dcargo.jboss.management-native.port=12345 -PdeployContainerId=jboss7x -PjbossHome=/opt/jboss cargoRedeployRemote
A possible alternative is to define the Cargo property directly in your Gradle build to handle this issue:
remote {
    // You can define custom cargo properties here
    containerProperties {
        property 'cargo.jboss.management-native.port', 12345
    }
    hostname = 'localhost'
    username = 'xxxxxxx'
    password = 'xxxxxxx'
}