I'm using the Elasticsearch + HBase version of PredictionIO from the sphereio/docker-predictionio Docker image and the universal recommendation template, template-scala-parallel-universal-recommendation.
pio-start-all and pio status work fine, and the event server is perfectly functional. I have created an app and imported a few hundred events to start with.
However, after running pio build on the template, pio train fails with a couple of javax.naming.NameNotFoundException warnings. Even pio.log contains nothing else.
Here's my engine.json:
{
  "comment": "This config file uses default settings for all but the required values; see README.md for docs",
  "id": "default",
  "description": "Default settings",
  "engineFactory": "com.test.RecommendationEngine",
  "datasource": {
    "params": {
      "name": "sample-handmade-data.txt",
      "appName": "testapp",
      "eventNames": ["START"]
    }
  },
  "sparkConf": {
    "spark.serializer": "org.apache.spark.serializer.KryoSerializer",
    "spark.kryo.registrator": "org.apache.mahout.sparkbindings.io.MahoutKryoRegistrator",
    "spark.kryo.referenceTracking": "false",
    "spark.kryoserializer.buffer": "300m",
    "spark.executor.memory": "4g",
    "es.index.auto.create": "true"
  },
  "algorithms": [{
    "comment": "simplest setup where all values are default, popularity based backfill, must add eventNames",
    "name": "ur",
    "params": {
      "appName": "testapp",
      "indexName": "urindex",
      "typeName": "items",
      "comment": "must have data for the first event or the model will not build, other events are optional",
      "eventNames": ["START"]
    }
  }]
}
And the pio train output:
[INFO] [Console$] Using existing engine manifest JSON at /PredictionIO-0.9.6/engines/universal-recommendation/manifest.json
[INFO] [Runner$] Submission command: /PredictionIO-0.9.6/vendors/spark-1.5.1-bin-hadoop2.6/bin/spark-submit --class io.prediction.workflow.CreateWorkflow --jars file:/PredictionIO-0.9.6/engines/universal-recommendation/target/scala-2.10/template-scala-parallel-universal-recommendation-assembly-0.2.3-deps.jar,file:/PredictionIO-0.9.6/engines/universal-recommendation/target/scala-2.10/template-scala-parallel-universal-recommendation_2.10-0.2.3.jar --files file:/PredictionIO-0.9.6/conf/log4j.properties,file:/PredictionIO-0.9.6/vendors/hbase-1.0.0/conf/hbase-site.xml --driver-class-path /PredictionIO-0.9.6/conf:/PredictionIO-0.9.6/vendors/hbase-1.0.0/conf file:/PredictionIO-0.9.6/lib/pio-assembly-0.9.6.jar --engine-id FYOHZGlAmUH2xAYWNmQFIf9Jls201WVr --engine-version a892fe59be15dcf27a17f07fb76135a967309fda --engine-variant file:/PredictionIO-0.9.6/engines/universal-recommendation/engine.json --verbosity 0 --json-extractor Both --env PIO_STORAGE_SOURCES_HBASE_TYPE=hbase,PIO_ENV_LOADED=1,PIO_STORAGE_REPOSITORIES_METADATA_NAME=pio_meta,PIO_VERSION=0.9.6,PIO_FS_BASEDIR=/root/.pio_store,PIO_STORAGE_SOURCES_ELASTICSEARCH_HOSTS=localhost,PIO_STORAGE_SOURCES_HBASE_HOME=/PredictionIO-0.9.6/vendors/hbase-1.0.0,PIO_HOME=/PredictionIO-0.9.6,PIO_FS_ENGINESDIR=/root/.pio_store/engines,PIO_STORAGE_SOURCES_LOCALFS_PATH=/root/.pio_store/models,PIO_STORAGE_SOURCES_ELASTICSEARCH_TYPE=elasticsearch,PIO_STORAGE_REPOSITORIES_METADATA_SOURCE=ELASTICSEARCH,PIO_STORAGE_REPOSITORIES_MODELDATA_SOURCE=LOCALFS,PIO_STORAGE_REPOSITORIES_EVENTDATA_NAME=pio_event,PIO_STORAGE_SOURCES_ELASTICSEARCH_CLUSTERNAME=predictionio,PIO_STORAGE_SOURCES_ELASTICSEARCH_HOME=/PredictionIO-0.9.6/vendors/elasticsearch-1.4.4,PIO_FS_TMPDIR=/root/.pio_store/tmp,PIO_STORAGE_REPOSITORIES_MODELDATA_NAME=pio_model,PIO_STORAGE_REPOSITORIES_EVENTDATA_SOURCE=HBASE,PIO_CONF_DIR=/PredictionIO-0.9.6/conf,PIO_STORAGE_SOURCES_ELASTICSEARCH_PORTS=9300,PIO_STORAGE_SOURCES_LOCALFS_TYPE=localfs
[INFO] [Engine] Extracting datasource params...
[INFO] [WorkflowUtils$] No 'name' is found. Default empty String will be used.
[INFO] [Engine] Datasource params: (,DataSourceParams(testapp,List(START)))
[INFO] [Engine] Extracting preparator params...
[INFO] [Engine] Preparator params: (,Empty)
[INFO] [Engine] Extracting serving params...
[INFO] [Engine] Serving params: (,Empty)
[INFO] [Remoting] Starting remoting
[INFO] [Remoting] Remoting started; listening on addresses :[akka.tcp://sparkDriver@172.17.0.2:42582]
[WARN] [MetricsSystem] Using default name DAGScheduler for source because spark.app.id is not set.
[INFO] [Engine$] EngineWorkflow.train
[INFO] [Engine$] DataSource: com.test.DataSource#75bd28d
[INFO] [Engine$] Preparator: com.test.Preparator#13278a41
[INFO] [Engine$] AlgorithmList: List(com.test.URAlgorithm#2365ea38)
[INFO] [Engine$] Data sanity check is on.
[WARN] [TableInputFormatBase] Cannot resolve the host name for 9a94fb2890b3/172.17.0.2 because of javax.naming.NameNotFoundException: DNS name not found [response code 3]; remaining name '2.0.17.172.in-addr.arpa'
[INFO] [Engine$] com.test.TrainingData does not support data sanity check. Skipping check.
[WARN] [TableInputFormatBase] Cannot resolve the host name for 9a94fb2890b3/172.17.0.2 because of javax.naming.NameNotFoundException: DNS name not found [response code 3]; remaining name '2.0.17.172.in-addr.arpa'
There is one way to sort this problem out: pass Google's DNS server when starting your Docker container:
--dns=8.8.8.8
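For example, a container start along those lines might look like this (a sketch only: the port mappings are assumptions based on the usual PredictionIO event server and engine defaults, not taken from the image's docs):

docker run -d --dns=8.8.8.8 -p 7070:7070 -p 8000:8000 sphereio/docker-predictionio

With a DNS server configured for the container, the NameNotFoundException warnings from TableInputFormatBase during pio train should go away.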
I am trying to map my CloudQuery database (Postgres) to Neo4j using the ETL Tool.
The project was created with the following versions:
Version 4.4.5
Neo4j Desktop version 1.4.15
I am on macOS Monterey 12.5 (21G72), Apple M1 Pro.
For Postgres (where CloudQuery is pouring the data) I am using the postgres:13.7-alpine3.16 Docker image.
I selected the RDBMS instance (the connection succeeded) and my project. In main.log I get:
[2022-07-28 14:27:13.192] [info] Executing '/Users/stevesolun/Library/Application Support/Neo4j Desktop/Application/distributions/java/zulu11.54.25-ca-jdk11.0.14.1/bin/java, -cp, /Users/stevesolun/Library/Application Support/Neo4j Desktop/Application/graphApps/_global/neo4j-etl-ui/dist/neo4j-etl.jar, org.neo4j.etl.rdbms.Support, jdbc:postgresql://localhost:5432/postgres?ssl=false, postgres, pass'
[2022-07-28 14:27:13.844] [info] Process [87889] exit with code '1', signal 'null'
[2022-07-28 14:27:43.264] [info] Online check request: https://dist.neo4j.org/neo4j-desktop/win/latest.yml
[2022-07-28 14:27:43.372] [info] Online check response: 200 version: 1.4.15
[2022-07-28 14:28:23.273] [info] Online check request: https://dist.neo4j.org/neo4j-desktop/win/latest.yml
[2022-07-28 14:28:23.382] [info] Online check response: 200 version: 1.4.15
[2022-07-28 14:29:03.278] [info] Online check request: https://dist.neo4j.org/neo4j-desktop/win/latest.yml
[2022-07-28 14:29:03.386] [info] Online check response: 200 version: 1.4.15
[2022-07-28 14:29:39.909] [info] Executing '/Users/stevesolun/Library/Application Support/Neo4j Desktop/Application/distributions/java/zulu11.54.25-ca-jdk11.0.14.1/bin/java, -cp, /Users/stevesolun/Library/Application Support/Neo4j Desktop/Application/graphApps/_global/neo4j-etl-ui/dist/neo4j-etl.jar, org.neo4j.etl.rdbms.Support, jdbc:postgresql://localhost:5432/postgres?ssl=false, postgres, pass'
[2022-07-28 14:29:40.771] [info] Process [88036] exit with code '0', signal 'null'
[2022-07-28 14:29:43.282] [info] Online check request: https://dist.neo4j.org/neo4j-desktop/win/latest.yml
[2022-07-28 14:29:43.338] [info] Online check response: 200 version: 1.4.15
[2022-07-28 14:29:44.217] [info] Executing '/Users/stevesolun/Library/Application Support/Neo4j Desktop/Application/distributions/java/zulu11.54.25-ca-jdk11.0.14.1/bin/java, -cp, /Users/stevesolun/Library/Application Support/Neo4j Desktop/Application/graphApps/_global/neo4j-etl-ui/dist/neo4j-etl.jar, org.neo4j.etl.NeoIntegrationCli, generate-metadata-mapping, --rdbms:url, jdbc:postgresql://localhost:5432/postgres?ssl=false, --rdbms:password, pass, --rdbms:user, postgres, --output-mapping-file, /var/folders/nl/301vjtr53s92b4wr66tchrjr0000gn/T/postgresql_postgres_mapping.json'
[2022-07-28 14:30:04.214] [info] Process [88039] exit with code '0', signal 'null'
[2022-07-28 14:30:04.216] [info] Executing '/Users/stevesolun/Library/Application Support/Neo4j Desktop/Application/distributions/java/zulu11.54.25-ca-jdk11.0.14.1/bin/java, -cp, /Users/stevesolun/Library/Application Support/Neo4j Desktop/Application/graphApps/_global/neo4j-etl-ui/dist/neo4j-etl.jar, org.neo4j.etl.util.FileUtils, readfile, /var/folders/nl/301vjtr53s92b4wr66tchrjr0000gn/T/postgresql_postgres_mapping.json'
[2022-07-28 14:30:04.616] [info] Process [88058] exit with code '143', signal 'null'
[2022-07-28 14:30:23.288] [info] Online check request: https://dist.neo4j.org/neo4j-desktop/win/latest.yml
[2022-07-28 14:30:23.596] [info] Online check response: 200 version: 1.4.15
[2022-07-28 14:31:03.292] [info] Online check request: https://dist.neo4j.org/neo4j-desktop/win/latest.yml
[2022-07-28 14:31:03.344] [info] Online check response: 200 version: 1.4.15
[2022-07-28 14:31:43.296] [info] Online check request: https://dist.neo4j.org/neo4j-desktop/win/latest.yml
[2022-07-28 14:31:43.459] [info] Online check response: 200 version: 1.4.15
It seems I do get the mapping file, but the mapping process takes ages without finishing.
Please advise: what should I do to make it work?
Can I upload the mapping file manually? How can I force it to do the mapping? Is it OK that it takes this long?
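For reference, the metadata-mapping command that Desktop executes (copied from the Executing line in the log above and only re-wrapped for readability; plain java stands in for the bundled JDK path) is:

java -cp "/Users/stevesolun/Library/Application Support/Neo4j Desktop/Application/graphApps/_global/neo4j-etl-ui/dist/neo4j-etl.jar" \
  org.neo4j.etl.NeoIntegrationCli generate-metadata-mapping \
  --rdbms:url "jdbc:postgresql://localhost:5432/postgres?ssl=false" \
  --rdbms:password pass \
  --rdbms:user postgres \
  --output-mapping-file /var/folders/nl/301vjtr53s92b4wr66tchrjr0000gn/T/postgresql_postgres_mapping.json

Running it by hand would at least show whether the mapping step itself hangs or whether the delay is in the Desktop UI.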
I am attempting to deploy with Maven inside Eclipse Version 2021-06 (4.20.0), Build id: 20210612-2011, after upgrading Eclipse when CodeMix ruined my previous install (which worked). I can successfully deploy from the command line, and I have set Eclipse to read my settings file:
[DEBUG] Reading global settings from C:\Users\neilb\.m2\settings.xml
[DEBUG] Reading user settings from C:\Users\neilb\.m2\settings.xml
The POM points to the correct repos and the details in the settings are correct; we know this because it deploys from the console. However, when I try to deploy inside Eclipse I get a 401:
[DEBUG] Configuring mojo 'org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy' with basic configurator -->
[DEBUG] (f) artifact = com.ziath.datapaq:scanner:jar:1.5.29
[DEBUG] (f) attachedArtifacts = []
[DEBUG] (s) localRepository = id: local
url: file:///C:/Users/neilb/.m2/repository/
layout: default
snapshots: [enabled => true, update => always]
releases: [enabled => true, update => always]
blocked: false
[DEBUG] (f) offline = false
[DEBUG] (f) packaging = jar
[DEBUG] (f) pomFile = C:\Users\neilb\Documents\GitHub\scanner\pom.xml
[DEBUG] (f) project = MavenProject: com.ziath.datapaq:scanner:1.5.29 @ C:\Users\neilb\Documents\GitHub\scanner\pom.xml
[DEBUG] (f) retryFailedDeploymentCount = 1
[DEBUG] (f) skip = false
[DEBUG] (f) updateReleaseInfo = false
[DEBUG] -- end configuration --
[DEBUG] Using connector AetherRepositoryConnector with priority 100.0 for http://10.9.8.246:8081/repository/lib-releases/ with username=admin, password=***
[INFO] Uploading to : http://10.9.8.246:8081/repository/lib-releases/com/ziath/datapaq/scanner/1.5.29/scanner-1.5.29.pom
[INFO] Uploading to : http://10.9.8.246:8081/repository/lib-releases/com/ziath/datapaq/scanner/1.5.29/scanner-1.5.29.jar
[INFO] Uploaded to : http://10.9.8.246:8081/repository/lib-releases/com/ziath/datapaq/scanner/1.5.29/scanner-1.5.29.pom (10 kB at 92 kB/s)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
The exception is as follows:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy (default-deploy) on project scanner: Failed to deploy artifacts: Could not transfer artifact com.ziath.datapaq:scanner:jar:1.5.29 from/to releases (http://10.9.8.246:8081/repository/lib-releases/): Access denied to http://10.9.8.246:8081/repository/lib-releases/com/ziath/datapaq/scanner/1.5.29/scanner-1.5.29.jar. Error code 401, Unauthorized -> [Help 1]
Does anyone have any idea what needs to be changed in order to get deployment from Eclipse 2021-06 to work?
Cheers,
Neil
This is a known bug:
https://github.com/eclipse-m2e/m2e-core/issues/250
There are workarounds listed there; so to the person who downvoted the question - ya boo sucks!
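For anyone who lands here with a 401 that is not caused by this m2e bug: Maven resolves deploy credentials by matching the <id> in distributionManagement against a <server> id in settings.xml, and a mismatch produces exactly this "Error code 401, Unauthorized" failure. A minimal sketch (the id and URL mirror the error message above; the credentials are placeholders):

<!-- pom.xml -->
<distributionManagement>
  <repository>
    <id>releases</id>
    <url>http://10.9.8.246:8081/repository/lib-releases/</url>
  </repository>
</distributionManagement>

<!-- settings.xml -->
<servers>
  <server>
    <!-- must match the repository id above -->
    <id>releases</id>
    <username>admin</username>
    <password>***</password>
  </server>
</servers>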
I'm trying to run a Lagom service in production mode in an Akka cluster, configured via Akka Cluster Bootstrap as described in https://www.lagomframework.com/documentation/1.5.x/scala/Cluster.html. However, I could not manage to start the service this way (I was able to run the app by specifying seed nodes manually). I have the following setup:
application.conf (only the cluster-related config):
akka.management.cluster.bootstrap {
  # example using kubernetes-api
  contact-point-discovery {
    discovery-method = akka.discovery
    # discovery-method = config
    service-name = "lagom-scala"
    required-contact-point-nr = 0
  }
}
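(As an aside, if the commented-out discovery-method = config route were used instead, it would need a static endpoint list, roughly like the sketch below; the host and management port are the ones appearing in the logs further down and are purely illustrative.)

akka.management.cluster.bootstrap.contact-point-discovery.discovery-method = config

akka.discovery.config.services = {
  lagom-scala = {
    endpoints = [
      { host = "192.168.0.34", port = 8558 }
    ]
  }
}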
An application loader that loads AkkaDiscoveryComponents in production mode, as described at https://www.lagomframework.com/documentation/1.5.x/scala/AkkaDiscoveryIntegration.html:
class LagomscalaLoader extends LagomApplicationLoader {

  override def load(context: LagomApplicationContext): LagomApplication =
    new LagomscalaApplication(context) with AkkaDiscoveryComponents

  override def loadDevMode(context: LagomApplicationContext): LagomApplication =
    new LagomscalaApplication(context) with LagomDevModeComponents

  override def describeService = Some(readDescriptor[LagomscalaService])
}
I get the following logs when required-contact-point-nr is set to 0:
2019-10-28T23:48:54.867Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [sourceThread=application-akka.actor.default-dispatcher-26, akkaTimestamp=23:48:54.867UTC, akkaSource=akka.tcp://application@192.168.0.34:2552/system/bootstrapCoordinator, sourceActorSystem=application] - Looking up [Lookup(lagom-scala,None,Some(tcp))]
2019-10-28T23:48:54.886Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [sourceThread=application-akka.actor.default-dispatcher-22, akkaTimestamp=23:48:54.886UTC, akkaSource=akka.tcp://application@192.168.0.34:2552/system/bootstrapCoordinator, sourceActorSystem=application] - Located service members based on: [Lookup(lagom-scala,None,Some(tcp))]: [], filtered to []
2019-10-28T23:48:55.957Z [info] akka.management.cluster.bootstrap.LowestAddressJoinDecider [sourceThread=application-akka.actor.default-dispatcher-16, akkaTimestamp=23:48:55.957UTC, akkaSource=LowestAddressJoinDecider(akka://application), sourceActorSystem=application] - Exceeded stable margins without locating seed-nodes, however this node 192.168.0.34:8558 is NOT the lowest address out of the discovered endpoints in this deployment, thus NOT joining self. Expecting node [] (out of []) to perform the self-join and initiate the cluster.
When I set required-contact-point-nr to 2 (default), I get the following logs:
2019-10-29T00:15:57.846Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [sourceThread=application-akka.actor.default-dispatcher-23, akkaTimestamp=00:15:57.846UTC, akkaSource=akka.tcp://application@192.168.0.34:2552/system/bootstrapCoordinator, sourceActorSystem=application] - Looking up [Lookup(lagom-scala,None,Some(tcp))]
2019-10-29T00:15:57.865Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [sourceThread=application-akka.actor.default-dispatcher-4, akkaTimestamp=00:15:57.865UTC, akkaSource=akka.tcp://application@192.168.0.34:2552/system/bootstrapCoordinator, sourceActorSystem=application] - Located service members based on: [Lookup(lagom-scala,None,Some(tcp))]: [], filtered to []
2019-10-29T00:15:58.299Z [info] akka.management.cluster.bootstrap.LowestAddressJoinDecider [sourceThread=application-akka.actor.default-dispatcher-3, akkaTimestamp=00:15:58.299UTC, akkaSource=LowestAddressJoinDecider(akka://application), sourceActorSystem=application] - Discovered [0] contact points, confirmed [0], which is less than the required [2], retrying
2019-10-29T00:15:58.599Z [warn] akka.cluster.sharding.ShardRegion [sourceThread=application-akka.actor.default-dispatcher-4, akkaTimestamp=00:15:58.597UTC, akkaSource=akka.tcp://application@192.168.0.34:2552/system/sharding/kafkaProducer-greetings, sourceActorSystem=application] - kafkaProducer-greetings: No coordinator found to register. Probably, no seed-nodes configured and manual cluster join not performed? Total [1] buffered messages.
I use Akka 2.5.25 and the default configuration, except for the settings specified above. For example, I see the following logs, which might be relevant, after starting the service:
2019-10-29T00:15:44.987Z [info] akka.remote.Remoting [sourceThread=main, akkaTimestamp=00:15:44.987UTC, akkaSource=akka.remote.Remoting, sourceActorSystem=application] - Remoting now listens on addresses: [akka.tcp://application@192.168.0.34:2552]
2019-10-29T00:15:45.276Z [info] akka.cluster.Cluster(akka://application) [sourceThread=application-akka.actor.default-dispatcher-2, akkaSource=akka.cluster.Cluster(akka://application), sourceActorSystem=application, akkaTimestamp=00:15:45.275UTC] - Cluster Node [akka.tcp://application@192.168.0.34:2552] - No seed-nodes configured, manual cluster join required, see https://doc.akka.io/docs/akka/current/cluster-usage.html#joining-to-seed-nodes
2019-10-29T00:15:46.411Z [info] akka.management.cluster.bootstrap.ClusterBootstrap [sourceThread=main, akkaTimestamp=00:15:46.411UTC, akkaSource=ClusterBootstrap(akka://application), sourceActorSystem=application] - Using self contact point address: http://192.168.0.34:8558
2019-10-29T00:15:48.164Z [info] akka.management.scaladsl.AkkaManagement [sourceThread=application-akka.actor.default-dispatcher-24, akkaSource=AkkaManagement(akka://application), sourceActorSystem=application, akkaTimestamp=00:15:48.163UTC] - Bound Akka Management (HTTP) endpoint to: 192.168.0.34:8558
2019-10-29T00:15:48.286Z [info] akka.management.cluster.bootstrap.internal.BootstrapCoordinator [sourceThread=application-akka.actor.default-dispatcher-24, akkaSource=akka.tcp://application@192.168.0.34:2552/system/bootstrapCoordinator, sourceActorSystem=application, akkaTimestamp=00:15:48.285UTC] - Locating service members. Using discovery [akka.discovery.aggregate.AggregateServiceDiscovery], join decider [akka.management.cluster.bootstrap.LowestAddressJoinDecider]
2019-10-29T00:15:48.772Z [info] play.core.server.AkkaHttpServer [] - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
So, I think there is a mismatch between the ports but I couldn't figure out how to fix it. Thanks for your help.
It seems the problem was a missing DNS server in the local settings, combined with the fact that Lagom itself does not provide one,
as stated by @tim-moore on the Lightbend forum.
I am using the latest version of the Play Framework, along with the following test dependencies from my build.sbt file:
"org.scalatest" %% "scalatest" % "3.0.0",
"org.scalatestplus.play" % "scalatestplus-play_2.11" % "2.0.0-M1"
I have a base specification that all of my test cases extend. I return a Future[Assertion] from each of my clauses. It looks like this:
trait BaseSpec extends AsyncWordSpec with TestSuite with OneServerPerSuite with MustMatchers with ParallelTestExecution
An example spec looks like this:
"PUT /v1/user/create" should {
"create a new user" in {
wsClient
.url(s"http://localhost:${port}/v1/user")
.put(Json.obj(
"name" -> "username",
"email" -> "email",
"password" -> "hunter12"
)).map { response => response.status must equal(201) }
}
}
I decided to rewrite my current tests using the AsyncWordSpec provided by the newer version of ScalaTest, but when I run the test suite, this is the output that I get:
[info] UserControllerSpec:
[info] PUT /v1/user/create
[info] application - ApplicationTimer demo: Starting application at 2016-11-13T01:29:12.161Z.
[info] application - ApplicationTimer demo: Stopping application at 2016-11-13T01:29:12.416Z after 1s.
[info] application - ApplicationTimer demo: Stopping application at 2016-11-13T01:29:12.438Z after 0s.
[info] application - ApplicationTimer demo: Stopping application at 2016-11-13T01:29:12.716Z after 0s.
[info] application - ApplicationTimer demo: Stopping application at 2016-11-13T01:29:13.022Z after 1s.
[info] ScalaTest
[info] Run completed in 13 seconds, 540 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 4, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] Passed: Total 0, Failed 0, Errors 0, Passed 0
[success] Total time: 20 s, completed Nov 12, 2016 8:29:13 PM
All of my test classes are found, built, and seemingly run by the test runner when invoking sbt test. I have also tried the IDEA test runner, which reports Empty Test Suite under each of my test classes. I have exhaustively attempted to RTFM, but I cannot see what I am doing wrong. The synchronous versions of my tests run totally fine.
EDIT 1: A friend suggested trying whenReady() { /* clause */ } on my Future[WSResponse], but this also failed.
I was having the same problem while using a test suite with multiple traits. I got it to work by removing all the other traits except for AsyncFlatSpec. I will add them back one at a time as I need them.
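For illustration, a stripped-down async spec along those lines, which the runner should discover (a sketch assuming ScalaTest 3.0.0, using AsyncWordSpec to match the question; the class name and values are illustrative):

import org.scalatest.{AsyncWordSpec, MustMatchers}

import scala.concurrent.Future

// Mixes in nothing beyond the async spec style and matchers.
class MinimalAsyncSpec extends AsyncWordSpec with MustMatchers {
  "a Future-returning clause" should {
    "complete with an Assertion" in {
      // The suite's implicit execution context drives the map.
      Future.successful(21 * 2).map { result =>
        result mustEqual 42
      }
    }
  }
}

If this minimal version runs, the culprit is one of the other mixed-in traits.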
There's an oft-asked question about changing the HTTP port to which a Play application will bind. James Ward's answer is generally accepted as the most complete, but it involves overriding the default by setting a http.port system property. However, is it possible to change this default without having to manually add it to the run command at development time, tweak the environment, or package an override in a runtime configuration?
This can be accomplished by setting the playDefaultPort key, as follows:
import PlayKeys._
playDefaultPort := 9123
Afterwards, you'll be able to run and testProd without needing to remember the desired port.
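For context, in a full build.sbt the setting sits alongside the usual Play plugin setup, roughly like this (the project name is illustrative):

import PlayKeys._

lazy val root = (project in file("."))
  .enablePlugins(PlayScala)
  .settings(
    name := "MyApp",
    // Picked up by run and testProd instead of the default 9000.
    playDefaultPort := 9123
  )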
This works in both development:
$ sbt run
[info] Loading project definition from /Users/michaelahlers/Projects/MyApp/project
[info] Set current project to MyApp (in build file:/Users/michaelahlers/Projects/MyApp/)
--- (Running the application, auto-reloading is enabled) ---
[info] p.c.s.NettyServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9123
(Server started, use Ctrl+D to stop and go back to the console...)
And production modes:
$ sbt testProd
[info] Loading project definition from /Users/michaelahlers/Projects/MyApp/project
[info] Set current project to MyApp (in build file:/Users/michaelahlers/Projects/MyApp/)
[info] Packaging /Users/michaelahlers/Projects/MyApp/target/scala-2.11/MyApp_2.11-1.0.0-SNAPSHOT-web-assets.jar ...
[info] Done packaging.
(Starting server. Type Ctrl+D to exit logs, the server will remain in background)
2016-04-08 13:09:45,594 [info] a.e.s.Slf4jLogger - Slf4jLogger started
2016-04-08 13:09:45,655 [info] play.api.Play - Application started (Prod)
2016-04-08 13:09:45,767 [info] p.c.s.NettyServer - Listening for HTTP on /0:0:0:0:0:0:0:0:9123