I'm playing with a MongoDB database through the ReactiveMongo driver. Here's my code:
import org.slf4j.LoggerFactory
import reactivemongo.api.MongoDriver
import reactivemongo.api.collections.default.BSONCollection
import reactivemongo.bson.BSONDocument
import scala.concurrent.Future
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
object Main {
  val log = LoggerFactory.getLogger("Main")

  def main(args: Array[String]): Unit = {
    log.info("Start")
    val conn = new MongoDriver().connection(List("localhost"))
    val db = conn("test")
    log.info("Done")
  }
}
My build.sbt file:
lazy val root = (project in file(".")).
  settings(
    name := "simpleapp",
    version := "1.0.0",
    scalaVersion := "2.11.4",
    libraryDependencies ++= Seq(
      "org.reactivemongo" %% "reactivemongo" % "0.10.5.0.akka23",
      "ch.qos.logback" % "logback-classic" % "1.1.2"
    )
  )
When I run: sbt compile run
I get this output:
$ sbt compile run
[success] Total time: 0 s, completed Apr 25, 2015 5:36:51 PM
[info] Running Main
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
17:36:52.328 [run-main-0] INFO Main - Start
17:36:52.333 [run-main-0] INFO Main - Done
And the application doesn't stop... :/
I have to press Ctrl + C to kill it.
I've read that MongoDriver() creates an ActorSystem, so I tried to close the connection manually with conn.close(), but I get this:
[info] Running Main
ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.
17:42:23.252 [run-main-0] INFO Main - Start
17:42:23.258 [run-main-0] INFO Main - Done
17:42:23.403 [reactivemongo-akka.actor.default-dispatcher-2] ERROR reactivemongo.core.actors.MongoDBSystem - (State: Closing) UNHANDLED MESSAGE: ChannelConnected(-973180998)
[INFO] [04/25/2015 17:42:23.413] [reactivemongo-akka.actor.default-dispatcher-3] [akka://reactivemongo/deadLetters] Message [reactivemongo.core.actors.Closed$] from Actor[akka://reactivemongo/user/$b#-1700211063] to Actor[akka://reactivemongo/deadLetters] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
[INFO] [04/25/2015 17:42:23.414] [reactivemongo-akka.actor.default-dispatcher-3] [akka://reactivemongo/user/$a] Message [reactivemongo.core.actors.Close$] from Actor[akka://reactivemongo/user/$b#-1700211063] to Actor[akka://reactivemongo/user/$a#-1418324178] was not delivered. [2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
And the app still doesn't exit.
So, what am I doing wrong? I can't find an answer...
It also seems to me that the official docs don't explain whether I should care about graceful shutdown at all.
I don't have much experience with console apps; I use the Play framework in my projects, but I want to create a sub-project that works with MongoDB.
I see many templates (in Activator) such as Play + ReactiveMongo and Play + Akka + Mongo, but there's no Scala + ReactiveMongo template that explains how to work with it properly :/
I was having the same problem. The solution I found was to invoke close on both objects, the driver and the connection:
val driver = new MongoDriver
val connection = driver.connection(List("localhost"))
...
connection.close()
driver.close()
If you close only the connection, the Akka system remains alive.
Tested with ReactiveMongo 0.12.
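For example, adapting the Main from the question (a sketch only, assuming the 0.12 API used above; the exact close signatures may differ in other versions):

import org.slf4j.LoggerFactory
import reactivemongo.api.MongoDriver

object Main {
  val log = LoggerFactory.getLogger("Main")

  def main(args: Array[String]): Unit = {
    log.info("Start")
    val driver = new MongoDriver()                          // owns the underlying ActorSystem
    val connection = driver.connection(List("localhost"))
    val db = connection("test")

    // ... run your queries here ...

    connection.close()  // close the connection pool
    driver.close()      // shut down the ActorSystem so the JVM can exit
    log.info("Done")
  }
}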
This looks like a known issue with ReactiveMongo; see the relevant thread on GitHub.
A fix for this was introduced in pull request #241 by reid-spencer, merged on 3 February 2015.
You should be able to fix it by using a newer version. If no release has been made since February, you could try checking out a version that includes this fix and building the code yourself.
As far as I can see, there's no mention of this bugfix in the release notes for version 0.10.5:
Bugfixes:
BSON library: fix BSONDateTimeNumberLike typeclass
Cursor: fix exception propagation
Commands: fix ok deserialization for some cases
Commands: fix CollStatsResult
Commands: fix AddToSet in aggregation
Core: fix connection leak in some cases
GenericCollection: do not ignore WriteConcern in save()
GenericCollection: do not ignore WriteConcern in bulk inserts
GridFS: fix uploadDate deserialization field
Indexes: fix parsing for Ascending and Descending
Macros: fix type aliases
Macros: allow custom annotations
The name of the committer does not appear either:
Here is the list of the commits included in this release (since 0.9, the top commit is the most recent one):
$ git shortlog -s -n refs/tags/v0.10.0..0.10.5.x.akka23
39 Stephane Godbillon
5 Andrey Neverov
4 lucasrpb
3 Faissal Boutaounte
2 杨博 (Yang Bo)
2 Nikolay Sokolov
1 David Liman
1 Maksim Gurtovenko
1 Age Mooij
1 Paulo "JCranky" Siqueira
1 Daniel Armak
1 Viktor Taranenko
1 Vincent Debergue
1 Andrea Lattuada
1 pavel.glushchenko
1 Jacek Laskowski
Looking at the commit history for 0.10.5.0.akka23 (the one you reference in build.sbt), it seems the fix was not merged into it.
Related
I have been using IntelliJ to get up to speed with developing Spark applications in Scala using sbt. I understand the basics, although IntelliJ hides a lot of the scaffolding, so I'd like to try getting something up and running from the command line (i.e. using a REPL). I am using macOS.
Here's what I've done:
mkdir -p ~/tmp/scalasparkrepl
cd !$
echo 'scalaVersion := "2.11.12"' > build.sbt
echo 'libraryDependencies += "org.apache.spark" %% "spark-core" % "2.3.0"' >> build.sbt
echo 'libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.3.0"' >> build.sbt
echo 'libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.3.0"' >> build.sbt
sbt console
That opens a scala REPL (including downloading all the dependencies) in which I run:
import org.apache.spark.SparkConf
import org.apache.spark.sql.{SparkSession, DataFrame}
val conf = new SparkConf().setMaster("local[*]")
val spark = SparkSession.builder().appName("spark repl").config(conf).config("spark.sql.warehouse.dir", "~/tmp/scalasparkreplhive").enableHiveSupport().getOrCreate()
spark.range(0, 1000).toDF()
which fails with the error access denied org.apache.derby.security.SystemPermission( "engine", "usederbyinternals" ):
scala> spark.range(0, 1000).toDF()
18/05/08 11:51:11 INFO SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('~/tmp/scalasparkreplhive').
18/05/08 11:51:11 INFO SharedState: Warehouse path is '/tmp/scalasparkreplhive'.
18/05/08 11:51:12 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
18/05/08 11:51:12 INFO HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
18/05/08 11:51:12 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
18/05/08 11:51:12 INFO ObjectStore: ObjectStore, initialize called
18/05/08 11:51:13 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
18/05/08 11:51:13 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
java.security.AccessControlException: access denied org.apache.derby.security.SystemPermission( "engine", "usederbyinternals" )
I've googled around and there is some information on this error, but nothing I've been able to use to solve it. I find it strange that a Scala/sbt project on the command line would have this problem whereas an sbt project in IntelliJ works fine (I pretty much copied/pasted the code from an IntelliJ project). I guess IntelliJ is doing something on my behalf, but I don't know what; that's why I'm undertaking this exercise.
Can anyone advise how to solve this problem?
I'm not going to take full credit for this, but it looks similar to SBT test does not work for spark test.
The solution is to issue this line before running the Scala code:
System.setSecurityManager(null)
So in full:
System.setSecurityManager(null)
import org.apache.spark.SparkConf
import org.apache.spark.sql.{SparkSession, DataFrame}
val conf = new SparkConf().setMaster("local[*]")
val spark = SparkSession.builder().appName("spark repl").config(conf).config("spark.sql.warehouse.dir", "~/tmp/scalasparkreplhive").enableHiveSupport().getOrCreate()
spark.range(0, 1000).toDF()
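If you're launching through sbt console as above, an alternative sketch (my own suggestion, not part of the linked answer) is to clear the security manager from build.sbt so you don't have to remember to run it in the REPL:

// build.sbt (sketch): drop the security manager before the console starts,
// so Derby's "usederbyinternals" permission check never triggers
initialize ~= { _ =>
  System.setSecurityManager(null)
}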
You can set the permission appropriately. Add this to your pre-init script:
export SBT_OPTS="-Djava.security.policy=runtime.policy"
Create a runtime.policy file:
grant codeBase "file:/home/user/.ivy2/cache/org.apache.derby/derby/jars/*" {
  permission org.apache.derby.security.SystemPermission "engine", "usederbyinternals";
};
This assumes that your runtime.policy file resides in the current working directory and that you're pulling Derby from your locally cached Ivy repository. Change the path to reflect the actual parent folder of the Derby JAR if necessary. The placement of the asterisk is significant; this is not a traditional shell glob.
See also: https://docs.oracle.com/javase/7/docs/technotes/guides/security/PolicyFiles.html
sbt seems to be using different classloaders, which makes some tests fail when they are run more than once in an sbt session, with the following error:
[info] java.lang.ClassCastException: net.i2p.crypto.eddsa.EdDSAPublicKey cannot be cast to net.i2p.crypto.eddsa.EdDSAPublicKey
[info] at com.advancedtelematic.libtuf.crypt.EdcKeyPair$.generate(RsaKeyPair.scala:120)
I tried equivalent code using pattern matching instead of asInstanceOf and I get the same result.
How can I make sure sbt uses the same class loader for all test executions in the same session?
I think it's related to this: Do security providers cause ClassLoader leaks in Java? Basically, Security is reusing providers from old classloaders, so this could happen in any multi-classpath environment (like OSGi), not just sbt.
Fix for your build.sbt (without forking):
testOptions in Test += Tests.Cleanup(() =>
  java.security.Security.removeProvider("BC"))
Experiment:
sbt-classloader-issue$ sbt
> test
[success] Total time: 1 s, completed Jul 6, 2017 11:43:53 PM
> test
[success] Total time: 0 s, completed Jul 6, 2017 11:43:55 PM
Explanation:
As I can see from your code (published here):
Security.addProvider(new BouncyCastleProvider)
you're reusing the same BouncyCastleProvider every time you run a test, because Security.addProvider only takes effect the first time. sbt creates a new classloader for every "test" run but reuses the same JVM. Security is effectively a JVM-scoped singleton: it was loaded by the JVM bootstrap classloader (classOf[java.security.Security].getClassLoader() == null), so sbt cannot reload or reinitialize this class.
And you can easily check that:
classOf[org.bouncycastle.jce.spec.ECParameterSpec].getClassLoader()
res30: ClassLoader = URLClassLoader with NativeCopyLoader with RawResources
The org.bouncycastle classes are loaded by a custom classloader (from sbt), which changes every time you run test.
So this code:
val generator = KeyPairGenerator.getInstance("ECDSA", "BC")
gets an instance of a class loaded from the old classloader (the one used for the first "test" run), and you're trying to initialize it with a spec from the new classloader:
generator.initialize(ecSpec)
That's why you're getting the "parameter object not a ECParameterSpec" exception. The reasoning behind "net.i2p.crypto.eddsa.EdDSAPublicKey cannot be cast to net.i2p.crypto.eddsa.EdDSAPublicKey" is basically the same.
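As an alternative to the cleanup hook (my assumption, not something the answer above prescribes), forking the tests gives each run its own JVM, and therefore its own java.security.Security singleton:

// build.sbt (sketch): run tests in a forked JVM so providers registered in
// one run cannot leak into the next
fork in Test := true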
I've read everything I could on SO and the ReactiveMongo community list, and I am stumped. I am using ReactiveMongo version 0.12 and am just trying to test it out, since I have some other problems.
The code in my Scala worksheet is:
import reactivemongo.api.{DefaultDB, MongoConnection, MongoDriver}
import reactivemongo.bson.{
  BSONDocumentWriter, BSONDocumentReader, Macros, document
}
import com.typesafe.config.{Config, ConfigFactory}
lazy val conf = ConfigFactory.load()
val driver1 = new reactivemongo.api.MongoDriver
val connection3 = driver1.connection(List("localhost"))
and the error I get is:
[NGSession 3: 127.0.0.1: compile-server] INFO reactivemongo.api.MongoDriver - No mongo-async-driver configuration found
com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'akka'
at com.typesafe.config.impl.SimpleConfig.findKey(testMongo.sc:120)
at com.typesafe.config.impl.SimpleConfig.find(testMongo.sc:143)
at com.typesafe.config.impl.SimpleConfig.find(testMongo.sc:155)
at com.typesafe.config.impl.SimpleConfig.find(testMongo.sc:160)
at com.typesafe.config.impl.SimpleConfig.getString(testMongo.sc:202)
at akka.actor.ActorSystem$Settings.<init>(testMongo.sc:165)
at akka.actor.ActorSystemImpl.<init>(testMongo.sc:501)
at akka.actor.ActorSystem$.apply(testMongo.sc:138)
at reactivemongo.api.MongoDriver.<init>(testMongo.sc:879)
at #worksheet#.driver1$lzycompute(testMongo.sc:9)
at #worksheet#.driver1(testMongo.sc:9)
at #worksheet#.get$$instance$$driver1(testMongo.sc:9)
at #worksheet#.#worksheet#(testMongo.sc:30)
My application.conf is in src/main/resources of the sub-project in which this worksheet is found, and it contains this:
mongo-async-driver {
  akka {
    loglevel = WARNING
  }
}
I added the ConfigFactory call precisely because I got this error and thought it might help; I looked at the code, and that's what ReactiveMongo is doing at this point, so I thought a call here might force the configuration to load. I have moved the application.conf file into every conceivable place, including a conf directory (thinking it might require Play conventions) and the src/main/resources of the top-level directory. Nothing works. So my first question is: what am I doing wrong? Where should the application.conf file go?
This info message causes my program to crash, and the driver doesn't get created, so I can't move on from here.
Also, I added an akka key to reference.conf just in case; that didn't help either.
I'm a bit confused about the logging system in Play.
Without importing any logging library, I added this to my code:
Logger.debug("Data is: " + data)
It didn't cause a compilation error, but at the same time it didn't print anything in the terminal window where I started Activator (where I typed activator run).
After looking here https://www.playframework.com/documentation/2.5.x/ScalaLogging, I also tried:
val logger = Logger(this.getClass)
logger.debug("Data is: " + data)
However, again nothing is printed.
Why is this happening?
There are a few log levels you can set in application.conf, according to the documentation:
# Root logger:
logger.root=ERROR
# Logger used by the framework:
logger.play=INFO
# Logger provided to your application:
logger.application=DEBUG
# Logger for a third party library
logger.org.springframework=INFO
Try setting the log level to DEBUG in your application.conf.
Currently there is an issue with the default logger configuration in DEV mode: https://github.com/playframework/playframework/issues/5842
The default level for applications is INFO, so debug messages are not shown.
Until that issue is fixed, the workaround is to override logback.xml, following the example in https://www.playframework.com/documentation/2.5.x/SettingsLogger, which defines the log level for application as DEBUG.
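For instance, a minimal conf/logback.xml along the lines of that example (a sketch only; adjust appenders and levels as needed):

<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%date [%level] from %logger - %message%n%xException</pattern>
    </encoder>
  </appender>

  <!-- "application" covers the default Logger; add entries for your own
       packages if you create loggers with Logger(this.getClass) -->
  <logger name="application" level="DEBUG" />
  <logger name="play" level="INFO" />

  <root level="WARN">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>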
1. Add the following to build.sbt:
libraryDependencies += "com.typesafe.scala-logging" %% "scala-logging" % "3.9.0"
2. Import the following in your controller:
import com.typesafe.scalalogging.Logger
3. Use:
private val logger = Logger(this.getClass)
logger.warn("your messages in here.")
I'm trying to use sbt with:
Keys.fork := true
With this option, all messages from the slf4j logger are shown as error messages.
It looks like this:
[error] 0 [main] INFO test - Test
Without forking, it looks like this:
1 [run-main] INFO test - Test
sbt version: 0.13
This is documented at http://www.scala-sbt.org/0.13.2/docs/Detailed-Topics/Forking.html, in the "Configuring output" section:
By default, forked output is sent to the Logger, with standard output logged at the Info level and standard error at the Error level. This can be configured with the outputStrategy setting.
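If you want the forked process's output passed straight through instead of being re-logged (so slf4j INFO lines stop showing up as [error]), a sketch for build.sbt, assuming sbt 0.13:

// build.sbt: send the forked process's stdout/stderr directly to sbt's own
// standard streams instead of routing them through the logger
outputStrategy := Some(StdoutOutput)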