Constructor TransportClient in class TransportClient cannot be accessed - scala

I wrote the following class for indexing documents in Elasticsearch:
import java.net.InetAddress
import com.typesafe.config.ConfigFactory
import org.elasticsearch.client.transport.TransportClient
import org.elasticsearch.common.settings.Settings
import org.elasticsearch.common.transport.InetSocketTransportAddress
import play.api.libs.json.{JsString, JsValue}
/**
* Created by liana on 12/07/16.
*/
class ElasticSearchConnector {
private var transportClient: TransportClient = null
private val host = "localhost"
private val port = 9300
private val cluster = "elasticsearch"
private val indexName = "tests"
private val docType = "test"
def configElasticSearch(): Unit =
{
val settings = Settings.settingsBuilder().put("cluster.name", cluster).build()
transportClient = new TransportClient(settings)
transportClient.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(host), port.toInt))
}
def putText(json: String, id: Int): String =
{
val response = transportClient.prepareIndex(indexName, docType, id.toString)
.setSource(json)
.get()
val responseId = response.getId
responseId
}
}
Then I use it as follows:
val json = """val jsonString =
{
"title": "Elastic",
"price": 2000,
"author":{
"first": "Zachary",
"last": "Tong";
}
}"""
val ec = new ElasticSearchConnector()
ec.configElasticSearch()
val id = ec.putText(json, 1)
System.out.println(id)
This is the error message I got:
Error:(28, 23) constructor TransportClient in class TransportClient
cannot be accessed in class ElasticSearchConnector
transportClient = new TransportClient(settings)
What is wrong here?

In the Elasticsearch API the TransportClient class has no public constructor, only a declared private one, so you cannot "new" up an instance of TransportClient directly. The API makes heavy use of the builder pattern, so to create an instance of the TransportClient you need to do something like:
val settings = Settings.settingsBuilder().put("cluster.name", cluster).build()
val transportClient = TransportClient.builder().settings(settings).build()
transportClient.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(host), port.toInt))
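Applied to the class from the question, configElasticSearch could then look like this (a sketch against the 2.x transport API that the settingsBuilder() call implies; the 5.x client is built differently, via PreBuiltTransportClient):
def configElasticSearch(): Unit = {
  val settings = Settings.settingsBuilder().put("cluster.name", cluster).build()
  // obtain the client through the builder rather than the private constructor
  transportClient = TransportClient.builder().settings(settings).build()
  transportClient.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(host), port))
}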

Related

Why does my Alpakka SFTP connector never connect?

I'm new to Alpakka/Akka Streams and I'm trying to set up a stream that moves data between two SFTP servers, with my system in the middle. Here's the code:
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.alpakka.ftp.scaladsl.Sftp
import akka.stream.alpakka.ftp.{FtpCredentials, SftpSettings}
import akka.stream.scaladsl.Keep
import net.schmizz.sshj.{DefaultConfig, SSHClient}
import java.net.InetAddress
class StreamingSftpTransport {
implicit val system: ActorSystem = ActorSystem("dr-service")
implicit val materializer: ActorMaterializer = ActorMaterializer()
private val PORT = 22
private val USER = "testsftp"
private val CREDENTIALS = FtpCredentials.create(USER, "t3st123")
private val BASEPATH = s"/home/$USER"
private val FILE_NAME = "testfile"
// Set up the source system connection
private val SOURCE_HOSTNAME = "host1"
private val sourceSettings = SftpSettings.apply(host = InetAddress.getByName(SOURCE_HOSTNAME))
.withCredentials(CREDENTIALS)
.withPort(22)
private val sourceClient = new SSHClient(new DefaultConfig)
private val configuredSourceClient = Sftp(sourceClient)
// Set up the destination system connection
private val DEST_HOSTNAME = "host2"
private val destSettings = SftpSettings.apply(host = InetAddress.getByName(DEST_HOSTNAME))
.withCredentials(CREDENTIALS)
.withPort(22)
private val destClient = new SSHClient(new DefaultConfig)
private val configuredDestClient = Sftp(destClient)
/**
* Execute the stream from host1 to host2
*/
def doTransfer(): Unit = {
val source = configuredSourceClient.fromPath(s"$BASEPATH/$FILE_NAME", sourceSettings)
val sink = configuredDestClient.toPath(s"$BASEPATH/$FILE_NAME", destSettings)
val runnable = source.toMat(sink)(Keep.right).run()
}
}
I've called this from a unit test with new StreamingSftpTransport().doTransfer(), but it never attempts to connect. What am I doing wrong?
As suggested by artur in a comment on my question, I wasn't blocking on the future, so the JVM was exiting before the connection could be established.
Adding the following line (plus the scala.concurrent.Await and scala.concurrent.duration._ imports) allowed the connections to be established:
Await.result(runnable, 180.seconds)
PS: Don't do this in production :)
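For real code, a non-blocking alternative is to return the materialized value instead of discarding it, and let the caller decide how to handle completion. A minimal sketch, assuming toPath materializes a Future[IOResult] as in current Alpakka releases:
import akka.stream.IOResult
import scala.concurrent.Future
def doTransfer(): Future[IOResult] = {
  val source = configuredSourceClient.fromPath(s"$BASEPATH/$FILE_NAME", sourceSettings)
  val sink = configuredDestClient.toPath(s"$BASEPATH/$FILE_NAME", destSettings)
  // Keep.right keeps the sink's materialized Future[IOResult]
  source.toMat(sink)(Keep.right).run()
}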

How to bind Slick dependency with Lagom?

So, I have this dependency which is used to create tables and interact with Postgres. Here is a sample class:
class ConfigTable {
this: DBFactory =>
import driver.api._
implicit val configKeyMapper = MappedColumnType.base[ConfigKey, String](e => e.toString, s => ConfigKey.withName(s))
val configs = TableQuery[ConfigMapping]
class ConfigMapping(tag: Tag) extends Table[Config](tag, "configs") {
def key = column[ConfigKey]("key")
def value = column[String]("value")
def * = (key, value) <> (Config.tupled, Config.unapply _)
}
/**
* add config
*
* @param config
* @return
*/
def add(config: Config): Try[Config] = try {
sync(db.run(configs += config)) match {
case 1 => Success(config)
case _ => Failure(new Exception("Unable to add config"))
}
} catch {
case ex: PSQLException =>
if (ex.getMessage.contains("duplicate key value")) Failure(new Exception("alt id already exists."))
else Failure(new Exception(ex.getMessage))
}
def get(key: ConfigKey): Option[Config] = sync(db.run(configs.filter(x => x.key === key).result)).headOption
def getAll(): Seq[Config] = sync(db.run(configs.result))
}
object ConfigTable extends ConfigTable with PSQLComponent
PSQLComponent is the Abstraction for Database meta configuration:
import slick.jdbc.PostgresProfile
trait PSQLComponent extends DBFactory {
val driver = PostgresProfile
import driver.api.Database
val db: Database = Database.forConfig("db.default")
}
DBFactory is again an abstraction:
import slick.jdbc.JdbcProfile
trait DBFactory {
val driver: JdbcProfile
import driver.api._
val db: Database
}
application.conf:
db.default {
driver = "org.postgresql.Driver"
url = "jdbc:postgresql://localhost:5432/db"
user = "user"
password = "pass"
hikaricp {
minimumIdle = ${db.default.async-executor.minConnections}
maximumPoolSize = ${db.default.async-executor.maxConnections}
}
}
jdbc-defaults.slick.profile = "slick.jdbc.PostgresProfile$"
lagom.persistence.jdbc.create-tables.auto=false
I compile and publish this dependency to nexus and trying to use this in my Lagom Microservice.
Here is the Loader Class:
class SlickExapleAppLoader extends LagomApplicationLoader {
override def load(context: LagomApplicationContext): LagomApplication = new SlickExampleApp(context) {
override def serviceLocator: ServiceLocator = NoServiceLocator
}
override def loadDevMode(context: LagomApplicationContext): LagomApplication = new SlickExampleApp(context) with LagomDevModeComponents {
}
override def describeService = Some(readDescriptor[SlickExampleLMSServiceImpl])
}
abstract class SlickExampleApp(context: LagomApplicationContext)
extends LagomApplication(context)
// No Idea which to use and how, nothing clear from doc too.
// with ReadSideJdbcPersistenceComponents
// with ReadSideSlickPersistenceComponents
// with SlickPersistenceComponents
with AhcWSComponents {
wire[SlickExampleScheduler]
}
I'm trying to implement it in this scheduler:
class SlickExampleScheduler @Inject()(lmsService: LMSService,
configuration: Configuration)(implicit ec: ExecutionContext) {
val brofile = `SomeDomainObject`
val gson = new Gson()
val concurrency = Runtime.getRuntime.availableProcessors() * 10
implicit val timeout: Timeout = 3.minute
implicit val system: ActorSystem = ActorSystem("LMSActorSystem")
implicit val materializer: ActorMaterializer = ActorMaterializer()
// Getting Exception Initializer here..... For ConfigTable ===> ExceptionLine
val schedulerImplDao = new SchedulerImplDao(ConfigTable)
def hitLMSAPI = {
println("=============>1")
schedulerImplDao.doSomething()
}
system.scheduler.schedule(2.seconds, 2.seconds) {
println("=============>")
hitLMSAPI
}
}
I'm not sure if this is the correct way, or, if it's not, what the correct way of doing this is. It is a project requirement to keep the data models separate from the service, for the obvious reason of re-usability.
Exception Stack:
17:50:38.666 [info] akka.cluster.Cluster(akka://lms-impl-application) [sourceThread=ForkJoinPool-1-worker-1, akkaTimestamp=12:20:38.665UTC, akkaSource=akka.cluster.Cluster(akka://lms-impl-application), sourceActorSystem=lms-impl-application] - Cluster Node [akka.tcp://lms-impl-application@127.0.0.1:45805] - Started up successfully
17:50:38.707 [info] akka.cluster.Cluster(akka://lms-impl-application) [sourceThread=lms-impl-application-akka.actor.default-dispatcher-6, akkaTimestamp=12:20:38.707UTC, akkaSource=akka.cluster.Cluster(akka://lms-impl-application), sourceActorSystem=lms-impl-application] - Cluster Node [akka.tcp://lms-impl-application@127.0.0.1:45805] - No seed-nodes configured, manual cluster join required
java.lang.ExceptionInInitializerError
at com.slick.init.impl.SlickExampleScheduler.<init>(SlickExampleScheduler.scala:29)
at com.slick.init.impl.SlickExampleApp.<init>(SlickExapleAppLoader.scala:42)
at com.slick.init.impl.SlickExapleAppLoader$$anon$2.<init>(SlickExapleAppLoader.scala:17)
at com.slick.init.impl.SlickExapleAppLoader.loadDevMode(SlickExapleAppLoader.scala:17)
at com.lightbend.lagom.scaladsl.server.LagomApplicationLoader.load(LagomApplicationLoader.scala:76)
at play.core.server.LagomReloadableDevServerStart$$anon$1.$anonfun$get$5(LagomReloadableDevServerStart.scala:176)
at play.utils.Threads$.withContextClassLoader(Threads.scala:21)
at play.core.server.LagomReloadableDevServerStart$$anon$1.$anonfun$get$3(LagomReloadableDevServerStart.scala:173)
at scala.Option.map(Option.scala:163)
at play.core.server.LagomReloadableDevServerStart$$anon$1.$anonfun$get$2(LagomReloadableDevServerStart.scala:149)
at scala.util.Success.flatMap(Try.scala:251)
at play.core.server.LagomReloadableDevServerStart$$anon$1.$anonfun$get$1(LagomReloadableDevServerStart.scala:147)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:658)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1402)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
Caused by: java.lang.NullPointerException
at com.example.db.models.LoginTable.<init>(LoginTable.scala:29)
at com.example.db.models.LoginTable$.<init>(LoginTable.scala:293)
at com.example.db.models.LoginTable$.<clinit>(LoginTable.scala)
... 24 more
This is how I got it working:
abstract class SlickExampleApp(context: LagomApplicationContext) extends LagomApplication(context)
with SlickPersistenceComponents with AhcWSComponents {
override implicit lazy val actorSystem: ActorSystem = ActorSystem("LMSActorSystem")
override lazy val materializer: ActorMaterializer = ActorMaterializer()
override lazy val lagomServer = serverFor[SlickExampleLMSService](wire[SlickExampleLMSServiceImpl])
lazy val externalService = serviceClient.implement[LMSService]
override def connectionPool: ConnectionPool = new HikariCPConnectionPool(environment)
override def jsonSerializerRegistry: JsonSerializerRegistry = new JsonSerializerRegistry {
override def serializers: immutable.Seq[JsonSerializer[_]] = Vector.empty
}
val loginTable = wire[LoginTable]
wire[SlickExampleScheduler]
}
> One thing I'd like to report: the Lagom docs about the application.conf configuration of Slick are not correct; they misled me for two days. Then I dug into the library code, and this is how it goes:
private val readSideConfig = system.settings.config.getConfig("lagom.persistence.read-side.jdbc")
private val jdbcConfig = system.settings.config.getConfig("lagom.persistence.jdbc")
private val createTables = jdbcConfig.getConfig("create-tables")
val autoCreateTables: Boolean = createTables.getBoolean("auto")
// users can disable the usage of jndiDbName for userland read-side operations by
// setting the jndiDbName to null. In which case we fallback to slick.db.
// slick.db must be defined otherwise the application will fail to start
val db = {
if (readSideConfig.hasPath("slick.jndiDbName")) {
new InitialContext()
.lookup(readSideConfig.getString("slick.jndiDbName"))
.asInstanceOf[Database]
} else if (readSideConfig.hasPath("slick.db")) {
Database.forConfig("slick.db", readSideConfig)
} else {
throw new RuntimeException("Cannot start because read-side database configuration is missing. " +
"You must define either 'lagom.persistence.read-side.jdbc.slick.jndiDbName' or 'lagom.persistence.read-side.jdbc.slick.db' in your application.conf.")
}
}
val profile = DatabaseConfig.forConfig[JdbcProfile]("slick", readSideConfig).profile
The configuration it requires is quite different from the one suggested in the docs.
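Based on the library code above, a minimal read-side configuration looks roughly like this (a sketch: the key names follow the source quoted above, the values are placeholders):
lagom.persistence.jdbc.create-tables.auto = false
lagom.persistence.read-side.jdbc {
  slick {
    profile = "slick.jdbc.PostgresProfile$"
    db {
      driver = "org.postgresql.Driver"
      url = "jdbc:postgresql://localhost:5432/db"
      user = "user"
      password = "pass"
    }
  }
}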

Unable to use Mongo Scala driver with case classes

I'm trying to use the Scala mongo driver with case classes, as described at: http://mongodb.github.io/mongo-scala-driver/2.2/getting-started/quick-tour-case-classes/
However I'm getting the exception:
Can't find a codec for class com.foo.model.User$.
org.bson.codecs.configuration.CodecConfigurationException: Can't find a codec for class com.foo.model.User$.
when I try to insert an item.
The case class:
case class User(
_id: ObjectId,
foo: String = "",
foo2: String = "",
foo3: String = "",
first: String,
last: String,
username: String,
pwHash: String = ""
gender: String,
isFoo: Boolean = false)
extends FooTrait
The code:
val providers = fromProviders( classOf[User])
val registry = fromRegistries(providers, DEFAULT_CODEC_REGISTRY)
val connStr = "mongodb://...."
val clusterSettings = ClusterSettings.builder().applyConnectionString(new ConnectionString(connStr)).build()
val clientSettings = MongoClientSettings.builder().codecRegistry(getCodecRegistry).clusterSettings(clusterSettings).build()
val client = MongoClient( clientSettings )
val database: MongoDatabase = client.getDatabase(dbName).withCodecRegistry(registry)
val modelCollection: MongoCollection[User] = database.getCollection("user")
val item = User(.....) //snipped
modelCollection.insertOne(item).toFuture()
Full stack trace:
Can't find a codec for class com.foo.model.User$.
org.bson.codecs.configuration.CodecConfigurationException: Can't find a codec for class com.foo.model.User$.
at org.bson.codecs.configuration.CodecCache.getOrThrow(CodecCache.java:46)
at org.bson.codecs.configuration.ProvidersCodecRegistry.get(ProvidersCodecRegistry.java:63)
at org.bson.codecs.configuration.ProvidersCodecRegistry.get(ProvidersCodecRegistry.java:37)
at com.mongodb.async.client.MongoCollectionImpl.getCodec(MongoCollectionImpl.java:1170)
at com.mongodb.async.client.MongoCollectionImpl.getCodec(MongoCollectionImpl.java:1166)
at com.mongodb.async.client.MongoCollectionImpl.executeInsertOne(MongoCollectionImpl.java:519)
at com.mongodb.async.client.MongoCollectionImpl.insertOne(MongoCollectionImpl.java:501)
at com.mongodb.async.client.MongoCollectionImpl.insertOne(MongoCollectionImpl.java:496)
at org.mongodb.scala.MongoCollection.$anonfun$insertOne$1(MongoCollection.scala:410)
at org.mongodb.scala.MongoCollection.$anonfun$insertOne$1$adapted(MongoCollection.scala:410)
at org.mongodb.scala.internal.ObservableHelper$$anon$2.apply(ObservableHelper.scala:42)
at org.mongodb.scala.internal.ObservableHelper$$anon$2.apply(ObservableHelper.scala:40)
at com.mongodb.async.client.SingleResultCallbackSubscription.requestInitialData(SingleResultCallbackSubscription.java:38)
at com.mongodb.async.client.AbstractSubscription.tryRequestInitialData(AbstractSubscription.java:151)
at com.mongodb.async.client.AbstractSubscription.request(AbstractSubscription.java:82)
at org.mongodb.scala.ObservableImplicits$BoxedSubscription.request(ObservableImplicits.scala:474)
at org.mongodb.scala.ObservableImplicits$ScalaObservable$$anon$2.onSubscribe(ObservableImplicits.scala:373)
at org.mongodb.scala.ObservableImplicits$ToSingleObservable$$anon$3.onSubscribe(ObservableImplicits.scala:440)
at org.mongodb.scala.Observer.onSubscribe(Observer.scala:85)
at org.mongodb.scala.Observer.onSubscribe$(Observer.scala:85)
at org.mongodb.scala.ObservableImplicits$ToSingleObservable$$anon$3.onSubscribe(ObservableImplicits.scala:432)
at com.mongodb.async.client.SingleResultCallbackSubscription.<init>(SingleResultCallbackSubscription.java:33)
at com.mongodb.async.client.Observables$2.subscribe(Observables.java:76)
at org.mongodb.scala.ObservableImplicits$BoxedObservable.subscribe(ObservableImplicits.scala:458)
at org.mongodb.scala.ObservableImplicits$ToSingleObservable.subscribe(ObservableImplicits.scala:432)
at org.mongodb.scala.ObservableImplicits$ScalaObservable.headOption(ObservableImplicits.scala:365)
at org.mongodb.scala.ObservableImplicits$ScalaObservable.head(ObservableImplicits.scala:351)
at org.mongodb.scala.ObservableImplicits$ScalaSingleObservable.toFuture(ObservableImplicits.scala:410)
I think I'm doing everything right - and unless this is a bug, the code should work. My mongo-scala-driver version is 2.2.0.
Any ideas?
Here is a sample that is working on my local box with Mongo 3.6.1.
// ammonite script mongo.sc
import $ivy.`org.mongodb.scala::mongo-scala-driver:2.2.0`
import org.mongodb.scala._
import org.mongodb.scala.connection._
import org.mongodb.scala.bson.ObjectId
import org.mongodb.scala.bson.codecs.Macros._
import org.mongodb.scala.bson.codecs.DEFAULT_CODEC_REGISTRY
import org.bson.codecs.configuration.CodecRegistries.{fromRegistries, fromProviders}
trait FooTrait
case class User(_id: ObjectId,
foo: String = "",
foo2: String = "",
foo3: String = "",
first: String,
last: String,
username: String,
pwHash: String = "",
gender: String,
isFoo: Boolean = false) extends FooTrait
val codecRegistry = fromRegistries(fromProviders(classOf[User]), DEFAULT_CODEC_REGISTRY )
import scala.collection.JavaConverters._
val clusterSettings: ClusterSettings = ClusterSettings.builder().hosts(List(new ServerAddress("localhost")).asJava).build()
val settings: MongoClientSettings = MongoClientSettings.builder().clusterSettings(clusterSettings).build()
val mongoClient: MongoClient = MongoClient(settings)
val database: MongoDatabase = mongoClient.getDatabase("mydb").withCodecRegistry(codecRegistry)
val userCollection: MongoCollection[User] = database.getCollection("user")
val user:User = User(new ObjectId(), "foo", "foo2", "foo3", "first", "last", "username", "pwHash", "gender", true)
import scala.concurrent.duration._
import scala.concurrent.Await
// wait for Mongo to complete insert operation
Await.result(userCollection.insertOne(user).toFuture(),3.seconds)
Once you save this snippet into a file mongo.sc, you can run it with Ammonite using
amm mongo.sc
If Mongo is running on the default port, the mydb database should get created automatically, including the new user collection.

Access private field in Companion object

I have a class TestClass with a companion object. How can I access a private field, say xyz, in the companion object using runtime reflection in Scala, when that private field is set from within the class as shown below?
class TestClass { TestClass.xyz = 100 }
object TestClass { private var xyz: Int = _ }
I tried the following
import scala.reflect.runtime.{currentMirror, universe => ru}
val testModuleSymbol = ru.typeOf[TestClass.type].termSymbol.asModule
val moduleMirror = currentMirror.reflectModule(testModuleSymbol)
val instanceMirror = currentMirror.reflect(moduleMirror.instance)
val xyzTerm = ru.typeOf[TestClass.type].decl(ru.TermName("xyz")).asTerm.accessed.asTerm
val fieldMirror = instanceMirror.reflectField(xyzTerm)
val context = fieldMirror.get.asInstanceOf[Int]
But I got the following error:
scala> val fieldMirror = instanceMirror.reflectField(xyzTerm)
scala.ScalaReflectionException: Scala field xyz of object TestClass isn't represented as a Java field, nor does it have a
Java accessor method. One common reason for this is that it may be a private class parameter
not used outside the primary constructor.
at scala.reflect.runtime.JavaMirrors$JavaMirror.scala$reflect$runtime$JavaMirrors$JavaMirror$$abort(JavaMirrors.scala:115)
at scala.reflect.runtime.JavaMirrors$JavaMirror.scala$reflect$runtime$JavaMirrors$JavaMirror$$ErrorNonExistentField(JavaMirrors.scala:127)
at scala.reflect.runtime.JavaMirrors$JavaMirror$JavaInstanceMirror.reflectField(JavaMirrors.scala:242)
at scala.reflect.runtime.JavaMirrors$JavaMirror$JavaInstanceMirror.reflectField(JavaMirrors.scala:233)
... 29 elided
This exception is thrown only when I refer to the variable xyz in TestClass (i.e. TestClass.xyz = 100). If this reference is removed from the class, my sample code works just fine.
Got this to work by reading the value through the generated accessor method (via reflectMethod) instead of reflecting the field itself:
import scala.reflect.runtime.universe._
import scala.reflect.runtime.{universe => ru}
val runMirror = ru.runtimeMirror(getClass.getClassLoader)
val objectDef = Class.forName("org.myorg.TestClass")
val objectTypeModule = runMirror.moduleSymbol(objectDef).asModule
val objectType = objectTypeModule.typeSignature
val methodMap = objectType.members
.filter(_.isMethod)
.map(d => {
d.name.toString -> d.asMethod
})
.toMap
// get the scala Object
val instance = runMirror.reflectModule(objectTypeModule).instance
val instanceMirror = runMirror.reflect(instance)
// get the private value
val result = instanceMirror.reflectMethod(methodMap("xyz")).apply()
assert(result == 100)

Mockito class is mocked but returns nothing

I am kind of new to Scala and, as the title says, I am trying to mock a class.
DateServiceTest.scala
@RunWith(classOf[JUnitRunner])
class DateServiceTest extends FunSuite with MockitoSugar {
val conf = new SparkConf().setAppName("Simple Application").setMaster("local")
val sc = new SparkContext(conf)
implicit val sqlc = new SQLContext(sc)
val m = mock[ConfigManager]
when(m.getParameter("dates.traitement")).thenReturn("10")
test("mocking test") {
val instance = new DateService
val date = instance.loadDates
assert(date === new DateTime())
}
}
DateService.scala
class DateService extends Serializable with Logging {
private val configManager = new ConfigManager
private lazy val datesTraitement = configManager.getParameter("dates.traitement").toInt
def loadDates() {
val date = selectFromDatabase(datesTraitement)
}
}
Unfortunately, when I run the test, datesTraitement returns null instead of 10, but m.getParameter("dates.traitement") does return 10.
Maybe I am using some kind of anti-pattern somewhere, but I don't know where. Please keep in mind that I am new to all of this and didn't find any proper example specific to my case on the internet.
Thanks for any help.
I think the issue is that your mock is not injected, as you create ConfigManager inline in the DateService class.
Instead of
class DateService extends Serializable with Logging {
private val configManager = new ConfigManager
}
try
class DateService(private val configManager: ConfigManager) extends Serializable with Logging
and in your test case inject the mocked ConfigManager when you construct DateService
class DateServiceTest extends FunSuite with MockitoSugar {
val m = mock[ConfigManager]
val instance = new DateService(m)
}