Testcontainers initializationError while running a test suite - docker-compose

I have multiple test classes running the same docker-compose setup with Testcontainers.
The suite fails with initializationError, although each test passes when run separately.
Here is the relevant part of the stack trace occurring during the second test.
./gradlew e2e:test -i
io.foo.e2e.AuthTest > initializationError FAILED
org.testcontainers.containers.ContainerLaunchException: Container startup failed
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:330)
at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:311)
at org.testcontainers.containers.DockerComposeContainer.startAmbassadorContainers(DockerComposeContainer.java:331)
at org.testcontainers.containers.DockerComposeContainer.start(DockerComposeContainer.java:178)
at io.foo.e2e.bases.BaseE2eTest$Companion.beforeAll$e2e(BaseE2eTest.kt:62)
at io.foo.e2e.bases.BaseE2eTest.beforeAll$e2e(BaseE2eTest.kt)
...
Caused by:
org.rnorth.ducttape.RetryCountExceededException: Retry limit hit with exception
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:88)
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:323)
... 83 more
Caused by:
org.testcontainers.containers.ContainerLaunchException: Could not create/start container
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:497)
at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:325)
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
... 84 more
Caused by:
org.testcontainers.containers.ContainerLaunchException: Aborting attempt to link to container btraq5fzahac_worker_1 as it is not running
at org.testcontainers.containers.GenericContainer.applyConfiguration(GenericContainer.java:779)
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:359)
... 86 more
It seems to me that the second test doesn't wait for the first one to shut down its containers.
Here is the base class that all tests inherit from. It is responsible for spinning up the containers.
open class BaseE2eTest {
    ...
    companion object {
        const val A = "containera_1"
        const val B = "containerb_1"
        const val C = "containerc_1"

        val dockerCompose: KDockerComposeContainer by lazy {
            defineDockerCompose()
                .withLocalCompose(true)
                .withExposedService(A, 8080, Wait.forListeningPort())
                .withExposedService(B, 8081)
                .withExposedService(C, 5672, Wait.forListeningPort())
        }

        class KDockerComposeContainer(file: File) : DockerComposeContainer<KDockerComposeContainer>(file)

        private fun defineDockerCompose() = KDockerComposeContainer(File("../docker-compose.yml"))

        @BeforeAll
        @JvmStatic
        internal fun beforeAll() {
            dockerCompose.start()
        }

        @AfterAll
        @JvmStatic
        internal fun afterAll() {
            dockerCompose.stop()
        }
    }
}
docker-compose version 1.27.4, build 40524192
testcontainers 1.15.2
testcontainers:junit-jupiter:1.15.2

After watching this talk, I realized that my Testcontainers instantiation approach with JUnit 5 was wrong.
Here is the working code:
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
open class BaseE2eTest {
    ...
    val A = "containera_1"
    val B = "containerb_1"
    val C = "containerc_1"

    val dockerCompose: KDockerComposeContainer by lazy {
        defineDockerCompose()
            .withLocalCompose(true)
            .withExposedService(A, 8080, Wait.forListeningPort())
            .withExposedService(B, 8081)
            .withExposedService(C, 5672, Wait.forListeningPort())
    }

    class KDockerComposeContainer(file: File) : DockerComposeContainer<KDockerComposeContainer>(file)

    private fun defineDockerCompose() = KDockerComposeContainer(File("../docker-compose.yml"))

    @BeforeAll
    fun beforeAll() {
        dockerCompose.start()
    }

    @AfterAll
    fun afterAll() {
        dockerCompose.stop()
    }
}
Now the test suite passes.
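For illustration, a concrete test class then only needs to extend the base, so each class starts and stops its own compose stack via the inherited @BeforeAll/@AfterAll. The test body below is a hypothetical sketch (only the AuthTest name appears in the original stack trace):
import org.junit.jupiter.api.Assertions.assertTrue
import org.junit.jupiter.api.Test

class AuthTest : BaseE2eTest() {

    @Test
    fun `service A is reachable`() {
        // Host and port are resolved from the compose stack started in beforeAll().
        val host = dockerCompose.getServiceHost(A, 8080)
        val port = dockerCompose.getServicePort(A, 8080)
        assertTrue(host.isNotBlank() && port > 0)
    }
}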

For anyone stumbling upon this: I had a similar problem that suddenly occurred after months of everything working perfectly fine. It was caused by this problem: Docker "ERROR: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network"
After removing all unused networks, the problem was fixed.
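For reference, one way to inspect and clear unused networks is Docker's prune command (a generic example, not necessarily the exact command used above):
docker network ls      # list existing networks
docker network prune   # remove all unused networks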

Related

Problems with the database connection in Ktor: why does Ktor not see the jbdc properties from application.conf?

I'm having a problem with my web project. It is my first time connecting to a database, and I want to connect just to see it working. I use Postgres and have set up a database. I am trying to connect with the following code:
Database settings
package com.testreftul
import com.testreftul.model.Orders
import com.testreftul.model.Products
import com.testreftul.model.User
import com.typesafe.config.ConfigFactory
import io.ktor.server.config.*
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.withContext
import org.jetbrains.exposed.sql.Database
import org.jetbrains.exposed.sql.SchemaUtils
import org.jetbrains.exposed.sql.transactions.transaction
object DbSettings {
private val appConfig = HoconApplicationConfig(ConfigFactory.load())
private var dbUrl = appConfig.property("jbdc.url").getString()
private var dbUser = appConfig.property("jbdc.username").getString()
private var dbPassword = appConfig.property("jbdc.password").getString()
fun init(dbUrl: String, dbUser: String, dbPassword: String) {
this.dbUrl = dbUrl
this.dbUser = dbUser
this.dbPassword = dbPassword
pgConnection()
transaction {
SchemaUtils.create(User, Products, Orders)
}
}
private fun pgConnection() = Database.connect(
url = dbUrl,
driver = "org.postgresql.Driver",
user = dbUser,
password = dbPassword
)
suspend fun <T> dbQuery(block:()->T): T =
withContext(Dispatchers.IO){
transaction {
block()
}
}
}
Database plugin
package com.testreftul.plugins
import com.testreftul.DbSettings
import io.ktor.server.application.*
fun Application.connectDatabase(){
val url = environment.config.property("jdbc.url").getString()
val username = environment.config.property("jdbc.username").getString()
val password = environment.config.property("jdbc.password").getString()
DbSettings.init(url,username,password)
}
Application.conf
ktor {
deployment {
port = 8080
port = ${?PORT}
}
application {
modules = [ com.testreftul.ApplicationKt.module ]
}
}
jdbc{
url = "jdbc:postgresql://localhost:5432/restest"
username = "postgres"
password = "admin"
}
build.gradle.kts
val ktor_version: String by project
val kotlin_version: String by project
val logback_version: String by project
val exposed_version:String by project
val postgresql_jdbc:String by project
plugins {
application
kotlin("jvm") version "1.6.20"
id("org.jetbrains.kotlin.plugin.serialization") version "1.6.20"
}
group = "com.testreftul"
version = "0.0.1"
application {
mainClass.set("io.ktor.server.netty.EngineMain")
val isDevelopment: Boolean = project.ext.has("development")
applicationDefaultJvmArgs = listOf("-Dio.ktor.development=$isDevelopment")
}
repositories {
mavenCentral()
maven { url = uri("https://maven.pkg.jetbrains.space/public/p/ktor/eap") }
}
dependencies {
implementation("io.ktor:ktor-server-core-jvm:$ktor_version")
implementation("io.ktor:ktor-server-content-negotiation-jvm:$ktor_version")
implementation("io.ktor:ktor-serialization-kotlinx-json-jvm:$ktor_version")
implementation("io.ktor:ktor-server-netty-jvm:$ktor_version")
implementation("ch.qos.logback:logback-classic:$logback_version")
implementation ("org.jetbrains.exposed:exposed-core:$exposed_version")
implementation ("org.jetbrains.exposed:exposed-dao:$exposed_version")
implementation ("org.jetbrains.exposed:exposed-jdbc:$exposed_version")
implementation ("org.jetbrains.exposed:exposed-java-time:$exposed_version")
implementation ("org.postgresql:postgresql:$postgresql_jdbc")
testImplementation("io.ktor:ktor-server-tests-jvm:$ktor_version")
testImplementation("org.jetbrains.kotlin:kotlin-test-junit:$kotlin_version")
}
gradle.properties
ktor_version=2.0.0-beta-1
kotlin_version=1.6.20
logback_version=1.2.3
kotlin.code.style=official
exposed_version=0.37.3
postgresql_jdbc=42.3.3
Application.kt
package com.testreftul
import io.ktor.server.application.*
import com.testreftul.plugins.*
fun main(args: Array<String>): Unit =
io.ktor.server.netty.EngineMain.main(args)
#Suppress("unused") // application.conf references the main function. This annotation prevents the IDE from marking it as unused.
fun Application.module() {
connectDatabase()
configureRouting()
configureSerialization()
}
I installed all the plugins that were needed and created a database. I also configured my IDE and connected to the database with the Postgres dialect. When I run the application, the console returns Caused by: io.ktor.server.config.ApplicationConfigurationException: Property jbdc.url not found.
> Task :run FAILED
2022-04-06 01:55:22.805 [main] TRACE Application - {
# application.conf # file:/C:/Users/eljan/IdeaProjects/ktor-restful1/build/resources/main/application.conf: 6
"application" : {
# application.conf # file:/C:/Users/eljan/IdeaProjects/ktor-restful1/build/resources/main/application.conf: 7
"modules" : [
# application.conf # file:/C:/Users/eljan/IdeaProjects/ktor-restful1/build/resources/main/application.conf: 7
"com.testreftul.ApplicationKt.module"
]
},
# application.conf # file:/C:/Users/eljan/IdeaProjects/ktor-restful1/build/resources/main/application.conf: 2
"deployment" : {
# application.conf # file:/C:/Users/eljan/IdeaProjects/ktor-restful1/build/resources/main/application.conf: 3
"port" : 8080
},
# Content hidden
"security" : "***"
}
2022-04-06 01:55:22.976 [main] INFO Application - Autoreload is disabled because the development mode is off.
Exception in thread "main" java.lang.ExceptionInInitializerError
at com.testreftul.plugins.DatabaseKt.connectDatabase(Database.kt:11)
at com.testreftul.ApplicationKt.module(Application.kt:11)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at kotlin.reflect.jvm.internal.calls.CallerImpl$Method.callMethod(CallerImpl.kt:97)
at kotlin.reflect.jvm.internal.calls.CallerImpl$Method$Static.call(CallerImpl.kt:106)
at kotlin.reflect.jvm.internal.KCallableImpl.call(KCallableImpl.kt:108)
at kotlin.reflect.jvm.internal.KCallableImpl.callDefaultMethod$kotlin_reflection(KCallableImpl.kt:159)
at kotlin.reflect.jvm.internal.KCallableImpl.callBy(KCallableImpl.kt:112)
at io.ktor.server.engine.internal.CallableUtilsKt.callFunctionWithInjection(CallableUtils.kt:119)
at io.ktor.server.engine.internal.CallableUtilsKt.executeModuleFunction(CallableUtils.kt:36)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$launchModuleByName$1.invoke(ApplicationEngineEnvironmentReloading.kt:333)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$launchModuleByName$1.invoke(ApplicationEngineEnvironmentReloading.kt:332)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.avoidingDoubleStartupFor(ApplicationEngineEnvironmentReloading.kt:357)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.launchModuleByName(ApplicationEngineEnvironmentReloading.kt:332)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.access$launchModuleByName(ApplicationEngineEnvironmentReloading.kt:32)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$instantiateAndConfigureApplication$1.invoke(ApplicationEngineEnvironmentReloading.kt:313)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$instantiateAndConfigureApplication$1.invoke(ApplicationEngineEnvironmentReloading.kt:311)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.avoidingDoubleStartup(ApplicationEngineEnvironmentReloading.kt:339)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.instantiateAndConfigureApplication(ApplicationEngineEnvironmentReloading.kt:311)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.createApplication(ApplicationEngineEnvironmentReloading.kt:144)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.start(ApplicationEngineEnvironmentReloading.kt:278)
at io.ktor.server.netty.NettyApplicationEngine.start(NettyApplicationEngine.kt:183)
at io.ktor.server.netty.EngineMain.main(EngineMain.kt:26)
Caused by: io.ktor.server.config.ApplicationConfigurationException: Property jbdc.url not found.
at io.ktor.server.config.HoconApplicationConfig.property(HoconApplicationConfig.kt:15)
at com.testreftul.DbSettings.<clinit>(DbSettings.kt:16)
... 26 more
Caused by: io.ktor.server.config.ApplicationConfigurationException: Property jbdc.url not found.
Execution failed for task ':run'.
> Process 'command 'C:\Users\eljan\.jdks\corretto-16.0.2\bin\java.exe'' finished with non-zero exit value 1
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
You have a typo in the config property names used for the default values of the private variables. It should be jdbc instead of jbdc:
private var dbUrl = appConfig.property("jbdc.url").getString()
private var dbUser = appConfig.property("jbdc.username").getString()
private var dbPassword = appConfig.property("jbdc.password").getString()
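With the names corrected, those defaults become (same lines, only jbdc changed to jdbc):
private var dbUrl = appConfig.property("jdbc.url").getString()
private var dbUser = appConfig.property("jdbc.username").getString()
private var dbPassword = appConfig.property("jdbc.password").getString()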

Loader constraint violation when using Dropwizard Histogram metric in Apache Flink

I am running a streaming job on a Flink 1.9.1 cluster and trying to get a histogram of values into our Prometheus metric collector. Per the recommendation in the Flink docs, I used the Dropwizard histogram implementation with the Flink-provided wrapper; however, when submitting the job to the cluster, it crashes with the following stack trace:
java.lang.LinkageError: loader constraint violation: when resolving method "org.apache.flink.dropwizard.metrics.DropwizardHistogramWrapper.<init>(Lcom/codahale/metrics/Histogram;)V" the class loader (instance of org/apache/flink/util/ChildFirstClassLoader) of the current class, com/example/foo/metrics/FooMeter, and the class loader (instance of sun/misc/Launcher$AppClassLoader) for the method's defining class, org/apache/flink/dropwizard/metrics/DropwizardHistogramWrapper, have different Class objects for the type com/codahale/metrics/Histogram used in the signature
at com.example.foo.metrics.FooMeter.<init>(FooMeter.scala:11)
at com.example.foo.transform.ValidFoos$.open(ValidFoos.scala:15)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at org.apache.flink.streaming.api.operators.StreamFlatMap.open(StreamFlatMap.java:43)
at org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:532)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:396)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
at java.lang.Thread.run(Thread.java:748)
I have found a similar error in the mailing list; however, using the shadowJar plugin in Gradle didn't help.
Is there something I am missing?
Relevant code here:
import com.codahale.metrics.{Histogram, SlidingWindowReservoir}
import org.apache.flink.dropwizard.metrics.DropwizardHistogramWrapper
import org.apache.flink.metrics.{MetricGroup, Histogram => FlinkHistogram}
class FooMeter(metricGroup: MetricGroup, name: String) {
private var histogram: FlinkHistogram = metricGroup.histogram(
name, new DropwizardHistogramWrapper(new Histogram(new SlidingWindowReservoir(500))))
def record(fooValue: Long): Unit = {
histogram.update(fooValue)
}
}
object ValidFoos extends RichFlatMapFunction[Try[FooData], Foo] {
@transient private var fooMeter: FooMeter = _
override def open(parameters: Configuration): Unit = {
fooMeter = new FooMeter(getRuntimeContext.getMetricGroup, "foo_values")
}
override def flatMap(value: Try[FooData], out: Collector[FooData]): Unit = {
Transform.validFoo(value) foreach (foo => {
fooMeter.record(foo.value)
out.collect(foo)
})
}
}
build.gradle:
plugins {
id 'scala'
id 'application'
id 'com.github.johnrengelman.shadow' version '2.0.4'
}
ext {
flinkVersion = "1.9.1"
scalaBinaryVersion = "2.11"
scalaVersion = "2.11.12"
}
dependencies {
implementation(
"org.apache.flink:flink-streaming-scala_${scalaBinaryVersion}:${flinkVersion}",
"org.apache.flink:flink-connector-kafka_${scalaBinaryVersion}:${flinkVersion}",
"org.apache.flink:flink-runtime-web_${scalaBinaryVersion}:${flinkVersion}",
"org.apache.flink:flink-json:${flinkVersion}"
"org.apache.flink:flink-metrics-dropwizard:${flinkVersion}",
"org.scala-lang:scala-library:${scalaVersion}",
)
}
shadowJar {
relocate("org.apache.flink.dropwizard", "com.example.foo.shaded.dropwizard")
relocate("com.codahale", "com.example.foo.shaded.codahale")
}
jar {
zip64 = true
archiveName = rootProject.name + '-all.jar'
manifest {
attributes('Main-Class': 'com.example.foo.Foo')
}
from {
configurations.compileClasspath.collect {
it.isDirectory() ? it : zipTree(it)
}
configurations.runtimeClasspath.collect {
it.isDirectory() ? it : zipTree(it)
}
}
}
Further Info:
Running the code locally works
Flink cluster is custom-compiled with the following directory structure:
# find /usr/lib/flink/
/usr/lib/flink/
/usr/lib/flink/plugins
/usr/lib/flink/plugins/flink-metrics-influxdb-1.9.1.jar
/usr/lib/flink/plugins/flink-s3-fs-hadoop-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-graphite-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-prometheus-1.9.1.jar
/usr/lib/flink/plugins/flink-cep_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-python_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-queryable-state-runtime_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-sql-client_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-slf4j-1.9.1.jar
/usr/lib/flink/plugins/flink-state-processor-api_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-oss-fs-hadoop-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-statsd-1.9.1.jar
/usr/lib/flink/plugins/flink-swift-fs-hadoop-1.9.1.jar
/usr/lib/flink/plugins/flink-gelly-scala_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-azure-fs-hadoop-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-datadog-1.9.1.jar
/usr/lib/flink/plugins/flink-shaded-netty-tcnative-dynamic-2.0.25.Final-7.0.jar
/usr/lib/flink/plugins/flink-s3-fs-presto-1.9.1.jar
/usr/lib/flink/plugins/flink-cep-scala_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-gelly_2.11-1.9.1.jar
/usr/lib/flink/lib
/usr/lib/flink/lib/flink-metrics-influxdb-1.9.1.jar
/usr/lib/flink/lib/flink-metrics-graphite-1.9.1.jar
/usr/lib/flink/lib/flink-metrics-prometheus-1.9.1.jar
/usr/lib/flink/lib/flink-table_2.11-1.9.1.jar
/usr/lib/flink/lib/flink-metrics-slf4j-1.9.1.jar
/usr/lib/flink/lib/log4j-1.2.17.jar
/usr/lib/flink/lib/slf4j-log4j12-1.7.15.jar
/usr/lib/flink/lib/flink-metrics-statsd-1.9.1.jar
/usr/lib/flink/lib/flink-metrics-datadog-1.9.1.jar
/usr/lib/flink/lib/flink-table-blink_2.11-1.9.1.jar
/usr/lib/flink/lib/flink-dist_2.11-1.9.1.jar
/usr/lib/flink/bin/...

Akka: Simple Akka Cluster not working.

We are creating a simple Akka cluster sample, following the Akka in Action book. We create 3 seed nodes as mentioned in the following code:
akka {
loglevel = INFO
stdout-loglevel = INFO
event-handlers = ["akka.event.Logging$DefaultLogger"]
log-dead-letters = 0
log-dead-letters-during-shutdown = off
actor {
provider = cluster
}
remote {
enabled-transport = ["akka.remote.netty.tcp"]
log-remote-lifecycle-events = off
netty.tcp {
hostname = "127.0.0.1"
hostname = ${?HOST}
port = ${?PORT}
}
}
cluster {
seed-nodes = [
"akka.tcp://words#127.0.0.1:2551",
"akka.tcp://words#127.0.0.1:2552",
"akka.tcp://words#127.0.0.1:2553"
]
roles = ["seed"]
role {
seed.min-nr-of-members = 1
}
}
}
Actor system code:
object Launcher extends App {
val seedConfig = ConfigFactory.load("seed")
val seedSystem = ActorSystem("words", seedConfig)
}
When I start the actor system from one terminal, the seed node loads, but according to the logs port 2552 is used while I expected 2551. When I open a second terminal and try to run the actor system again, I get the following exception:
[ERROR] [03/10/2017 15:51:45.245] [words-akka.remote.default-remote-dispatcher-13] [NettyTransport(akka://words)] failed to bind to /127.0.0.1:2552, shutting down Netty transport
[error] (run-main-0) org.jboss.netty.channel.ChannelException: Failed to bind to: /127.0.0.1:2552
org.jboss.netty.channel.ChannelException: Failed to bind to: /127.0.0.1:2552
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at akka.remote.transport.netty.NettyTransport.$anonfun$listen$1(NettyTransport.scala:417)
at scala.util.Success.$anonfun$map$1(Try.scala:251)
at scala.util.Success.map(Try.scala:209)
This error occurs because port 2552 is already in use by the first terminal, but I am not sure why the same port is used the second time. Our assumption is that maybe the configuration is not loaded. So, how can we fix this?

Debugging a standalone Jetty server - how to specify single-threaded mode?

I have successfully created a standalone Scalatra / Jetty server, using the official instructions from Scalatra ( http://www.scalatra.org/2.3/guides/deployment/standalone.html )
I am debugging it under Ensime, and would like to limit the number of threads handling messages to a single one - so that single-stepping through the servlet methods will be easier.
I used this code to achieve it:
package ...
import org.eclipse.jetty.server.Server
import org.eclipse.jetty.servlet.{DefaultServlet, ServletContextHandler}
import org.eclipse.jetty.webapp.WebAppContext
import org.scalatra.servlet.ScalatraListener
import org.eclipse.jetty.util.thread.QueuedThreadPool
import org.eclipse.jetty.server.ServerConnector
object JettyLauncher {
def main(args: Array[String]) {
val port =
if (System.getenv("PORT") != null)
System.getenv("PORT").toInt
else
4080
// DEBUGGING MODE BEGINS
val threadPool = new QueuedThreadPool()
threadPool.setMaxThreads(8)
val server = new Server(threadPool)
val connector = new ServerConnector(server)
connector.setPort(port)
server.setConnectors(Array(connector))
// DEBUGGING MODE ENDS
val context = new WebAppContext()
context setContextPath "/"
context.setResourceBase("src/main/webapp")
context.addEventListener(new ScalatraListener)
context.addServlet(classOf[DefaultServlet], "/")
server.setHandler(context)
server.start
server.join
}
}
It works fine - except for one minor detail...
I can't tell Jetty to use 1 thread - the minimum value is 8!
If I do, this is what happens:
$ sbt assembly
...
$ java -jar ./target/scala-2.11/CurrentVersions-assembly-0.1.0-SNAPSHOT.jar
18:13:27.059 [main] INFO org.eclipse.jetty.util.log - Logging initialized #41ms
18:13:27.206 [main] INFO org.eclipse.jetty.server.Server - jetty-9.1.z-SNAPSHOT
18:13:27.220 [main] WARN o.e.j.u.component.AbstractLifeCycle - FAILED org.eclipse.jetty.server.Server#1ac539f: java.lang.IllegalStateException: Insufficient max threads in ThreadPool: max=1 < needed=8
java.lang.IllegalStateException: Insufficient max threads in ThreadPool: max=1 < needed=8
...which is why you see setMaxThreads(8) instead of setMaxThreads(1) in my code above.
Any ideas why this happens?
The reason is that the size of the thread pool also depends on the number of connectors you've got defined. If you look at the source code of the Jetty server you'll see this:
// check size of thread pool
SizedThreadPool pool = getBean(SizedThreadPool.class);
int max=pool==null?-1:pool.getMaxThreads();
int selectors=0;
int acceptors=0;
if (mex.size()==0)
{
for (Connector connector : _connectors)
{
if (connector instanceof AbstractConnector)
acceptors+=((AbstractConnector)connector).getAcceptors();
if (connector instanceof ServerConnector)
selectors+=((ServerConnector)connector).getSelectorManager().getSelectorCount();
}
}
int needed=1+selectors+acceptors;
if (max>0 && needed>max)
throw new IllegalStateException(String.format("Insufficient threads: max=%d < needed(acceptors=%d + selectors=%d + request=1)",max,acceptors,selectors));
So the minimum with a single ServerConnector is 2. It looks like you've got a couple of other default connectors or selectors running.

Scalate ResourceNotFoundException in Scalatra

I'm trying the following based on scalatra-sbt.g8:
class FooWeb extends ScalatraServlet with ScalateSupport {
beforeAll { contentType = "text/html" }
get("/") {
templateEngine.layout("/WEB-INF/scalate/templates/hello-scalate.jade")
}
}
but I'm getting the following exception (even though the file exists) - any clues?
Could not load resource: [/WEB-INF/scalate/templates/hello-scalate.jade]; are you sure it's within [null]?
org.fusesource.scalate.util.ResourceNotFoundException: Could not load resource: [/WEB-INF/scalate/templates/hello-scalate.jade]; are you sure it's within [null]?
FWIW, the innermost exception is coming from org.mortbay.jetty.handler.ContextHandler.getResource line 1142: _baseResource==null.
I got an answer from the Scalatra mailing list. The problem was that I was starting the Jetty server with:
import org.mortbay.jetty.Server
import org.mortbay.jetty.servlet.{Context,ServletHolder}
val server = new Server(8080)
val root = new Context(server, "/", Context.SESSIONS)
root.addServlet(new ServletHolder(new FooWeb()), "/*")
server.start()
I needed to insert this before start():
root.setResourceBase("src/main/webapp")