Loader constraint violation when using Dropwizard Histogram metric in Apache Flink - scala

I am running a streaming job on a Flink 1.9.1 cluster and trying to get a histogram of values into our Prometheus metric collector. Per the recommendation in the Flink docs, I used the Dropwizard histogram implementation with the Flink-provided wrapper. However, when the job is submitted to the cluster, it crashes with the following stack trace:
java.lang.LinkageError: loader constraint violation: when resolving method "org.apache.flink.dropwizard.metrics.DropwizardHistogramWrapper.<init>(Lcom/codahale/metrics/Histogram;)V" the class loader (instance of org/apache/flink/util/ChildFirstClassLoader) of the current class, com/example/foo/metrics/FooMeter, and the class loader (instance of sun/misc/Launcher$AppClassLoader) for the method's defining class, org/apache/flink/dropwizard/metrics/DropwizardHistogramWrapper, have different Class objects for the type com/codahale/metrics/Histogram used in the signature
at com.example.foo.metrics.FooMeter.<init>(FooMeter.scala:11)
at com.example.foo.transform.ValidFoos$.open(ValidFoos.scala:15)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at org.apache.flink.streaming.api.operators.StreamFlatMap.open(StreamFlatMap.java:43)
at org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:532)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:396)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
at java.lang.Thread.run(Thread.java:748)
I found a similar error in the mailing list; however, using the shadowJar plugin in Gradle didn't help.
Is there something I am missing?
The relevant code is here:
import com.codahale.metrics.{Histogram, SlidingWindowReservoir}
import org.apache.flink.dropwizard.metrics.DropwizardHistogramWrapper
import org.apache.flink.metrics.{MetricGroup, Histogram => FlinkHistogram}

class FooMeter(metricGroup: MetricGroup, name: String) {
  private var histogram: FlinkHistogram = metricGroup.histogram(
    name, new DropwizardHistogramWrapper(new Histogram(new SlidingWindowReservoir(500))))

  def record(fooValue: Long): Unit = {
    histogram.update(fooValue)
  }
}
import org.apache.flink.api.common.functions.RichFlatMapFunction
import org.apache.flink.configuration.Configuration
import org.apache.flink.util.Collector

import scala.util.Try

object ValidFoos extends RichFlatMapFunction[Try[FooData], Foo] {
  @transient private var fooMeter: FooMeter = _

  override def open(parameters: Configuration): Unit = {
    fooMeter = new FooMeter(getRuntimeContext.getMetricGroup, "foo_values")
  }

  override def flatMap(value: Try[FooData], out: Collector[Foo]): Unit = {
    Transform.validFoo(value) foreach (foo => {
      fooMeter.record(foo.value)
      out.collect(foo)
    })
  }
}
build.gradle:
plugins {
    id 'scala'
    id 'application'
    id 'com.github.johnrengelman.shadow' version '2.0.4'
}

ext {
    flinkVersion = "1.9.1"
    scalaBinaryVersion = "2.11"
    scalaVersion = "2.11.12"
}
dependencies {
    implementation(
        "org.apache.flink:flink-streaming-scala_${scalaBinaryVersion}:${flinkVersion}",
        "org.apache.flink:flink-connector-kafka_${scalaBinaryVersion}:${flinkVersion}",
        "org.apache.flink:flink-runtime-web_${scalaBinaryVersion}:${flinkVersion}",
        "org.apache.flink:flink-json:${flinkVersion}",
        "org.apache.flink:flink-metrics-dropwizard:${flinkVersion}",
        "org.scala-lang:scala-library:${scalaVersion}"
    )
}
shadowJar {
    relocate("org.apache.flink.dropwizard", "com.example.foo.shaded.dropwizard")
    relocate("com.codahale", "com.example.foo.shaded.codahale")
}

jar {
    zip64 = true
    archiveName = rootProject.name + '-all.jar'
    manifest {
        attributes('Main-Class': 'com.example.foo.Foo')
    }
    from {
        configurations.compileClasspath.collect {
            it.isDirectory() ? it : zipTree(it)
        }
        configurations.runtimeClasspath.collect {
            it.isDirectory() ? it : zipTree(it)
        }
    }
}
Further info:
Running the code locally works.
The Flink cluster is custom-compiled, with the following directory structure:
# find /usr/lib/flink/
/usr/lib/flink/
/usr/lib/flink/plugins
/usr/lib/flink/plugins/flink-metrics-influxdb-1.9.1.jar
/usr/lib/flink/plugins/flink-s3-fs-hadoop-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-graphite-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-prometheus-1.9.1.jar
/usr/lib/flink/plugins/flink-cep_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-python_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-queryable-state-runtime_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-sql-client_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-slf4j-1.9.1.jar
/usr/lib/flink/plugins/flink-state-processor-api_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-oss-fs-hadoop-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-statsd-1.9.1.jar
/usr/lib/flink/plugins/flink-swift-fs-hadoop-1.9.1.jar
/usr/lib/flink/plugins/flink-gelly-scala_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-azure-fs-hadoop-1.9.1.jar
/usr/lib/flink/plugins/flink-metrics-datadog-1.9.1.jar
/usr/lib/flink/plugins/flink-shaded-netty-tcnative-dynamic-2.0.25.Final-7.0.jar
/usr/lib/flink/plugins/flink-s3-fs-presto-1.9.1.jar
/usr/lib/flink/plugins/flink-cep-scala_2.11-1.9.1.jar
/usr/lib/flink/plugins/flink-gelly_2.11-1.9.1.jar
/usr/lib/flink/lib
/usr/lib/flink/lib/flink-metrics-influxdb-1.9.1.jar
/usr/lib/flink/lib/flink-metrics-graphite-1.9.1.jar
/usr/lib/flink/lib/flink-metrics-prometheus-1.9.1.jar
/usr/lib/flink/lib/flink-table_2.11-1.9.1.jar
/usr/lib/flink/lib/flink-metrics-slf4j-1.9.1.jar
/usr/lib/flink/lib/log4j-1.2.17.jar
/usr/lib/flink/lib/slf4j-log4j12-1.7.15.jar
/usr/lib/flink/lib/flink-metrics-statsd-1.9.1.jar
/usr/lib/flink/lib/flink-metrics-datadog-1.9.1.jar
/usr/lib/flink/lib/flink-table-blink_2.11-1.9.1.jar
/usr/lib/flink/lib/flink-dist_2.11-1.9.1.jar
/usr/lib/flink/bin/...

Related

testcontainer initializationError while running a test suite

I have multiple test classes running the same docker-compose file with Testcontainers.
The suite fails with initializationError, although each test passes when run separately.
Here is the relevant part of the stack trace, occurring during the second test.
./gradlew e2e:test -i
io.foo.e2e.AuthTest > initializationError FAILED
org.testcontainers.containers.ContainerLaunchException: Container startup failed
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:330)
at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:311)
at org.testcontainers.containers.DockerComposeContainer.startAmbassadorContainers(DockerComposeContainer.java:331)
at org.testcontainers.containers.DockerComposeContainer.start(DockerComposeContainer.java:178)
at io.foo.e2e.bases.BaseE2eTest$Companion.beforeAll$e2e(BaseE2eTest.kt:62)
at io.foo.e2e.bases.BaseE2eTest.beforeAll$e2e(BaseE2eTest.kt)
...
Caused by:
org.rnorth.ducttape.RetryCountExceededException: Retry limit hit with exception
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:88)
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:323)
... 83 more
Caused by:
org.testcontainers.containers.ContainerLaunchException: Could not create/start container
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:497)
at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:325)
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
... 84 more
Caused by:
org.testcontainers.containers.ContainerLaunchException: Aborting attempt to link to container btraq5fzahac_worker_1 as it is not running
at org.testcontainers.containers.GenericContainer.applyConfiguration(GenericContainer.java:779)
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:359)
... 86 more
It seems to me that the second test doesn't wait for the first to shut down its containers.
Here is the base class that all tests inherit from. It is responsible for spinning up the containers.
open class BaseE2eTest {
    ...
    companion object {
        const val A = "containera_1"
        const val B = "containerb_1"
        const val C = "containerc_1"

        val dockerCompose: KDockerComposeContainer by lazy {
            defineDockerCompose()
                .withLocalCompose(true)
                .withExposedService(A, 8080, Wait.forListeningPort())
                .withExposedService(B, 8081)
                .withExposedService(C, 5672, Wait.forListeningPort())
        }

        class KDockerComposeContainer(file: File) : DockerComposeContainer<KDockerComposeContainer>(file)

        private fun defineDockerCompose() = KDockerComposeContainer(File("../docker-compose.yml"))

        @BeforeAll
        @JvmStatic
        internal fun beforeAll() {
            dockerCompose.start()
        }

        @AfterAll
        @JvmStatic
        internal fun afterAll() {
            dockerCompose.stop()
        }
    }
}
docker-compose version 1.27.4, build 40524192
Testcontainers 1.15.2
testcontainers:junit-jupiter:1.15.2
After watching this talk, I realized that my Testcontainers instantiation approach with JUnit 5 was wrong.
Here is the working code:
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
open class BaseE2eTest {
    ...
    val A = "containera_1"
    val B = "containerb_1"
    val C = "containerc_1"

    val dockerCompose: KDockerComposeContainer by lazy {
        defineDockerCompose()
            .withLocalCompose(true)
            .withExposedService(A, 8080, Wait.forListeningPort())
            .withExposedService(B, 8081)
            .withExposedService(C, 5672, Wait.forListeningPort())
    }

    class KDockerComposeContainer(file: File) : DockerComposeContainer<KDockerComposeContainer>(file)

    private fun defineDockerCompose() = KDockerComposeContainer(File("../docker-compose.yml"))

    @BeforeAll
    fun beforeAll() {
        dockerCompose.start()
    }

    @AfterAll
    fun afterAll() {
        dockerCompose.stop()
    }
}
Now the test suite passes.
For anyone stumbling upon this: I had a similar problem that suddenly occurred after months of everything working perfectly fine. It was caused by this problem: Docker "ERROR: could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network".
After removing all unused networks (e.g. with docker network prune), the problem was fixed.

Proto + Kafka Producer: Error serializing object to JSON: Direct self-reference leading to cycle (through reference chain: unknownFields)

Stack: Micronaut-Kafka, Proto3, gRPC, BloomRPC
Goal: receive a gRPC request from BloomRPC and post it to a Kafka topic
Context: I coded a gRPC endpoint which receives a simple call successfully, but when trying to post that object to a Kafka topic I get:
12:38:18.775 [DefaultDispatcher-worker-1] ERROR i.m.r.intercept.RecoveryInterceptor - Type [com.mybank.producer.DebitProducer$Intercepted] executed with error: Exception sending producer record for method [void sendRequestMessage(String key,DebitRequest message)]: Error serializing object to JSON: Direct self-reference leading to cycle (through reference chain: com.mybank.endpoint.DebitRequest["unknownFields"]->com.google.protobuf.UnknownFieldSet["defaultInstanceForType"])
io.micronaut.messaging.exceptions.MessagingClientException: Exception sending producer record for method [void sendRequestMessage(String key,DebitRequest message)]: Error serializing object to JSON: Direct self-reference leading to cycle (through reference chain: com.mybank.endpoint.DebitRequest["unknownFields"]->com.google.protobuf.UnknownFieldSet["defaultInstanceForType"])
at io.micronaut.configuration.kafka.intercept.KafkaClientIntroductionAdvice.wrapException(KafkaClientIntroductionAdvice.java:564)
I guess my issue is somehow related to "...Error serializing object to JSON". Do I really have to convert the object to JSON? Can't I just post it to the Kafka topic as-is? What am I missing here? I can easily work around the issue by creating a new object and copying each property over from the request object, like this:
endpoint (controller) method
override suspend fun sendDebit(request: DebitRequest): DebitReply {
    val dtoDebiter = Debiter()
    dtoDebiter.id = request.id.toString()
    dtoDebiter.name = request.name
    val postStatus: String = transactionService.postDebitTransaction(dtoDebiter)
    return DebitReply.newBuilder().setMessage(postStatus).build()
}
Model created only for the conversion (doesn't this solution seem weird?)
class Debiter {
    lateinit var id: String
    lateinit var name: String
}
So, my straight question is: can I reuse the same autogenerated stub generated from the proto file as a model, both for receiving the request and for posting it to the Kafka topic? If so, what is wrong in the code below? If not, what is the recommended approach?
Full code:
transaction.proto
syntax = "proto3";
option java_multiple_files = true;
option java_package = "com.mybank.endpoint";
option java_outer_classname = "TransactionProto";
option objc_class_prefix = "HLW";
package com.mybank.endpoint;
service Account {
  rpc SendDebit (DebitRequest) returns (DebitReply) {}
}

message DebitRequest {
  int64 id = 1;
  string name = 2;
}

message DebitReply {
  string message = 1;
}
transaction endpoint (controller)
package com.mybank.endpoint

import com.mybank.dto.Debiter
import com.mybank.service.TransactionService
import javax.inject.Singleton

@Singleton
@Suppress("unused")
class TransactionEndpoint(val transactionService: TransactionService) : AccountGrpcKt.AccountCoroutineImplBase() {
    override suspend fun sendDebit(request: DebitRequest): DebitReply {
        val postStatus: String = transactionService.postDebitTransaction(request)
        return DebitReply.newBuilder().setMessage(postStatus).build()
    }
}
transaction service (not really relevant here)
package com.mybank.service

import com.mybank.dto.Debiter
import com.mybank.dto.Transaction
import com.mybank.dto.Transactions
import com.mybank.endpoint.DebitRequest
import com.mybank.producer.DebitProducer
import javax.inject.Inject
import javax.inject.Named
import javax.inject.Singleton

@Singleton
class TransactionService() {
    @Inject
    @Named("debitProducer")
    lateinit var debitProducer: DebitProducer

    fun postDebitTransaction(debit: DebitRequest): String {
        debitProducer.sendRequestMessage("1", debit)
        return "posted"
    }
}
kafka producer
package com.mybank.producer
import com.mybank.dto.Transaction
import com.mybank.dto.Transactions
import com.mybank.endpoint.DebitRequest
import io.micronaut.configuration.kafka.annotation.KafkaClient
import io.micronaut.configuration.kafka.annotation.KafkaKey
import io.micronaut.configuration.kafka.annotation.Topic
#KafkaClient
public interface DebitProducer {
#Topic("debit")
fun sendRequestMessage(#KafkaKey key: String?, message: DebitRequest?) {
}
}
build.gradle
plugins {
    id "org.jetbrains.kotlin.jvm" version "1.3.72"
    id "org.jetbrains.kotlin.kapt" version "1.3.72"
    id "org.jetbrains.kotlin.plugin.allopen" version "1.3.72"
    id "application"
    id 'com.google.protobuf' version '0.8.13'
}

version "0.2"
group "account-control"

repositories {
    mavenLocal()
    jcenter()
}

configurations {
    // for dependencies that are needed for development only
    developmentOnly
}
dependencies {
    kapt(enforcedPlatform("io.micronaut:micronaut-bom:$micronautVersion"))
    kapt("io.micronaut:micronaut-inject-java")
    kapt("io.micronaut:micronaut-validation")
    implementation(enforcedPlatform("io.micronaut:micronaut-bom:$micronautVersion"))
    implementation("org.jetbrains.kotlin:kotlin-stdlib-jdk8:${kotlinVersion}")
    implementation("org.jetbrains.kotlin:kotlin-reflect:${kotlinVersion}")
    implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:$kotlinxCoroutinesVersion")
    implementation("io.micronaut:micronaut-runtime")
    // implementation("io.micronaut.grpc:micronaut-grpc-runtime")
    implementation("io.micronaut.grpc:micronaut-grpc-server-runtime:$micronautGrpcVersion")
    implementation("io.micronaut.grpc:micronaut-grpc-client-runtime:$micronautGrpcVersion")
    implementation("io.grpc:grpc-kotlin-stub:${grpcKotlinVersion}")
    // Kafka
    implementation("io.micronaut.kafka:micronaut-kafka")
    // Vert.x
    implementation("io.micronaut.sql:micronaut-vertx-mysql-client")
    // implementation("io.micronaut.configuration:micronaut-vertx-mysql-client")
    compile 'io.vertx:vertx-lang-kotlin:3.9.4'
    // MongoDB
    implementation("org.mongodb:mongodb-driver-reactivestreams:4.1.1")
    runtimeOnly("ch.qos.logback:logback-classic:1.2.3")
    runtimeOnly("com.fasterxml.jackson.module:jackson-module-kotlin:2.9.8")
    kaptTest("io.micronaut:micronaut-inject-java")
    testImplementation enforcedPlatform("io.micronaut:micronaut-bom:$micronautVersion")
    testImplementation("org.junit.jupiter:junit-jupiter-api:5.3.0")
    testImplementation("io.micronaut.test:micronaut-test-junit5")
    testImplementation("org.mockito:mockito-junit-jupiter:2.22.0")
    testRuntime("org.junit.jupiter:junit-jupiter-engine:5.3.0")
    testRuntime("org.jetbrains.spek:spek-junit-platform-engine:1.1.5")
}
test.classpath += configurations.developmentOnly
mainClassName = "account-control.Application"
test {
    useJUnitPlatform()
}

allOpen {
    annotation("io.micronaut.aop.Around")
}

compileKotlin {
    kotlinOptions {
        jvmTarget = '11'
        // Will retain parameter names for Java reflection
        javaParameters = true
    }
}
// compileKotlin.dependsOn(generateProto)

compileTestKotlin {
    kotlinOptions {
        jvmTarget = '11'
        javaParameters = true
    }
}

tasks.withType(JavaExec) {
    classpath += configurations.developmentOnly
    jvmArgs('-XX:TieredStopAtLevel=1', '-Dcom.sun.management.jmxremote')
}

sourceSets {
    main {
        java {
            srcDirs 'build/generated/source/proto/main/grpc'
            srcDirs 'build/generated/source/proto/main/grpckt'
            srcDirs 'build/generated/source/proto/main/java'
        }
    }
}

protobuf {
    protoc { artifact = "com.google.protobuf:protoc:${protocVersion}" }
    plugins {
        grpc { artifact = "io.grpc:protoc-gen-grpc-java:${grpcVersion}" }
        grpckt { artifact = "io.grpc:protoc-gen-grpc-kotlin:${grpcKotlinVersion}" }
    }
    generateProtoTasks {
        all()*.plugins {
            grpc {}
            grpckt {}
        }
    }
}
In case it is relevant, here is a piece of the stub autogenerated by the protocol buffer compiler:
// Generated by the protocol buffer compiler. DO NOT EDIT!
// source: transaction.proto
package com.mybank.endpoint;
/**
* Protobuf type {@code com.mybank.endpoint.DebitRequest}
*/
public final class DebitRequest extends
com.google.protobuf.GeneratedMessageV3 implements
// @@protoc_insertion_point(message_implements:com.mybank.endpoint.DebitRequest)
DebitRequestOrBuilder {
private static final long serialVersionUID = 0L;
// Use DebitRequest.newBuilder() to construct.
private DebitRequest(com.google.protobuf.GeneratedMessageV3.Builder<?> builder) {
super(builder);
}
private DebitRequest() {
name_ = "";
}
@java.lang.Override
@SuppressWarnings({"unused"})
protected java.lang.Object newInstance(
UnusedPrivateParameter unused) {
return new DebitRequest();
}
@java.lang.Override
public final com.google.protobuf.UnknownFieldSet
getUnknownFields() {
return this.unknownFields;
}
private DebitRequest(
com.google.protobuf.CodedInputStream input,
com.google.protobuf.ExtensionRegistryLite extensionRegistry)
throws com.google.protobuf.InvalidProtocolBufferException {
this();
...
*** First edit
@KafkaClient(
    id = "debit-client",
    acks = KafkaClient.Acknowledge.ALL,
    properties = [Property(name = ProducerConfig.RETRIES_CONFIG, value = "5")]
    // How do I add these two properties in order to use the Protobuf serializer?
    // kafka.producers.*.key-serializer
    // kafka.producers.*.value-serializer
)
interface DebitProducer {

    @Topic("debit")
    fun sendRequestMessage(@KafkaKey key: String?, message: DebitRequest?) {
    }
}
*** Second edit
I tried adding the serializer via application.yaml:
micronaut:
  application:
    name: account-control
grpc:
  server:
    port: 8082
kafka:
  producers:
    product-client:
      value:
        serializer: io.confluent.kafka.serializers.protobuf.KafkaProtobufSerializer
build.gradle
//kafka-protobuf-serializer
implementation("io.confluent:kafka-protobuf-serializer:6.0.0")
and now I get the following error during the Gradle build:
Execution failed for task ':extractIncludeProto'.
> Could not resolve all files for configuration ':compileProtoPath'.
> Could not resolve com.squareup.wire:wire-schema:3.2.2.
Required by:
project : > io.confluent:kafka-protobuf-serializer:6.0.0 > io.confluent:kafka-protobuf-provider:6.0.0
> The consumer was configured to find a component, preferably only the resources files. However we cannot choose between the following variants of com.squareup.wire:wire-schema:3.2.2:
- jvm-api
- jvm-runtime
- metadata-api
All of them match the consumer attributes:
- Variant 'jvm-api' capability com.squareup.wire:wire-schema:3.2.2 declares a component, packaged as a jar:
- Unmatched attributes:
- Provides release status but the consumer didn't ask for it
- Provides an API but the consumer didn't ask for it
- Provides attribute 'org.jetbrains.kotlin.platform.type' with value 'jvm' but the consumer didn't ask for it
- Variant 'jvm-runtime' capability com.squareup.wire:wire-schema:3.2.2 declares a component, packaged as a jar:
- Unmatched attributes:
- Provides release status but the consumer didn't ask for it
- Provides a runtime but the consumer didn't ask for it
- Provides attribute 'org.jetbrains.kotlin.platform.type' with value 'jvm' but the consumer didn't ask for it
- Variant 'metadata-api' capability com.squareup.wire:wire-schema:3.2.2:
- Unmatched attributes:
- Doesn't say anything about its elements (required them preferably only the resources files)
- Provides release status but the consumer didn't ask for it
- Provides a usage of 'kotlin-api' but the consumer didn't ask for it
- Provides attribute 'org.jetbrains.kotlin.platform.type' with value 'common' but the consumer didn't ask for it
* Try:
Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':extractIncludeProto'.
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:38)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:416)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:406)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$1.execute(DefaultBuildOperationExecutor.java:165)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:250)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:158)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:102)
at org.gradle.internal.operations.DelegatingBuildOperationExecutor.call(DelegatingBuildOperationExecutor.java:36)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:41)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:370)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:357)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:350)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:336)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.lambda$run$0(DefaultPlanExecutor.java:127)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:191)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.executeNextNode(DefaultPlanExecutor.java:182)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:124)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
Caused by: org.gradle.api.internal.artifacts.ivyservice.DefaultLenientConfiguration$ArtifactResolveException: Could not resolve all files for configuration ':compileProtoPath'.
at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration.rethrowFailure(DefaultConfiguration.java:1265)
at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration.access$1800(DefaultConfiguration.java:141)
at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration$ConfigurationFileCollection.visitContents(DefaultConfiguration.java:1242)
at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration$ConfigurationFileCollection.visitContents(DefaultConfiguration.java:1235)
at org.gradle.api.internal.artifacts.configurations.DefaultConfiguration.visitContents(DefaultConfiguration.java:489)
at org.gradle.api.internal.file.AbstractFileCollection.visitStructure(AbstractFileCollection.java:265)
at org.gradle.api.internal.file.CompositeFileCollection.visitContents(CompositeFileCollection.java:152)
at org.gradle.api.internal.file.AbstractFileCollection.visitStructure(AbstractFileCollection.java:265)
at org.gradle.internal.fingerprint.impl.DefaultFileCollectionSnapshotter.snapshot(DefaultFileCollectionSnapshotter.java:50)
at org.gradle.internal.fingerprint.impl.AbstractFileCollectionFingerprinter.fingerprint(AbstractFileCollectionFingerprinter.java:47)
at ...
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.executeNextNode(DefaultPlanExecutor.java:182)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:124)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)

Serving an entire directory tree with spray.io

I would like to translate this JS code to Scala, using spray.io.
How can I translate the line below to Scala using spray.io?
app.use('/', express.static(path.join(__dirname, 'public')));
In other words, how can I serve an entire directory tree using spray.io?
As the comment above says, Spray is deprecated, but the directives are similar in akka-http. Here is what you probably need (getFromResourceDirectory in your case):
pathPrefix("docs") {
get {
path("swagger.json") {
getFromResource("swagger.json", ContentTypes.`application/json`)
} ~
(pathEnd | pathSingleSlash) {
redirect("docs/index.html", StatusCodes.TemporaryRedirect)
} ~
getFromResourceDirectory("swagger-ui")
}
}
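For a plain filesystem directory (the closest analogue of express.static from the question), the getFromDirectory directive can be used instead of getFromResourceDirectory. Below is a minimal, self-contained akka-http sketch, assuming Akka 2.6 / akka-http 10.2; the public directory, port, and system name are made-up values for illustration, not from the question:
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._

object StaticFileServer extends App {
  // With Akka 2.6 an implicit ActorSystem is enough to materialize the server.
  implicit val system: ActorSystem = ActorSystem("static-file-server")

  // Serve ./public/index.html for "/" and everything else recursively from ./public.
  val route =
    pathEndOrSingleSlash {
      getFromFile("public/index.html")
    } ~
    getFromDirectory("public")

  Http().bindAndHandle(route, "localhost", 8080)
}
getFromDirectory resolves the unmatched part of the request path against the given directory, so it can also be nested under a pathPrefix, just like getFromResourceDirectory in the snippet above.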
This serves files (recursively) from the directory ./web/
package com.softwaremill.spray.server

import akka.actor.ActorSystem
import spray.routing.SimpleRoutingApp

object Step1Complete extends App with SimpleRoutingApp {
  implicit val actorSystem = ActorSystem()

  startServer(interface = "localhost", port = 3300) {
    get {
      path("hello") {
        complete {
          "Welcome to Amber Gold!"
        }
      }
    } ~
    pathPrefix("web") {
      getFromDirectory("./web/")
    }
  }
}

When does Play load application.conf?

Is application.conf already loaded when the code in Global.scala is executed? I'm asking because I've tried to read some configuration items from Global.scala and I always get None. Is there any workaround?
In Java it's already available in beforeStart(Application app):
public class Global extends GlobalSettings {
    public void beforeStart(Application app) {
        String secret = Play.application().configuration().getString("application.secret");
        play.Logger.debug("Before start secret is: " + secret);
        super.beforeStart(app);
    }
}
As it's required for e.g. configuring the DB connection, Scala most probably works the same way (I can't check).
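For reference, here is a minimal Scala sketch of the same idea (untested; it assumes Play 2.x's play.api.GlobalSettings and that application.secret is defined in application.conf):
import play.api.{Application, GlobalSettings, Logger}

object Global extends GlobalSettings {
  override def beforeStart(app: Application): Unit = {
    // app.configuration is already populated at this point; getString returns an Option,
    // so a missing key shows up as None rather than null.
    val secret = app.configuration.getString("application.secret")
    Logger.debug(s"Before start secret is: $secret")
    super.beforeStart(app)
  }
}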
Below is how to read the configuration just after it has been loaded but before the application actually starts:
import play.api.{Configuration, Mode}
import play.api.GlobalSettings
import java.io.File
import utils.apidocs.InfoHelper
object Global extends GlobalSettings {

  override def onLoadConfig(
      config: Configuration,
      path: File,
      classloader: ClassLoader,
      mode: Mode.Mode): Configuration = {
    InfoHelper.loadApiInfo(config)
    config
  }
}
And below, just for your info, is the source of InfoHelper.loadApiInfo – it just loads the API info for the Swagger UI:
package utils.apidocs
import play.api.Configuration
import com.wordnik.swagger.config._
import com.wordnik.swagger.model._
object InfoHelper {

  def loadApiInfo(config: Configuration) = {
    config.getString("application.name").map { appName =>
      config.getString("application.domain").map { appDomain =>
        config.getString("application.emails.apiteam").map { contact =>
          val apiInfo = ApiInfo(
            title = s"$appName API",
            description = s"""
              Fantastic application that makes you smile. You can find out
              more about $appName at $appDomain.
            """,
            termsOfServiceUrl = s"//$appDomain/terms",
            contact = contact,
            license = s"$appName Subscription and Services Agreement",
            licenseUrl = s"//$appDomain/license"
          )
          ConfigFactory.config.info = Some(apiInfo)
        }}}
  }
}
I hope it helps.

How to test remote (production) Scalatra web service with Specs2?

I'm using Specs2 to test my Scalatra web service.
class APISpec extends ScalatraSpec {
  def is = "Simple test" ^
    "invalid key should return status 401" ! root401

  addServlet(new APIServlet(), "/*")

  def root401 = get("/payments") {
    status must_== 401
  }
}
This tests the web service locally (localhost). Now I would like to run the same tests against the production Jetty server. Ideally, I would be able to do this by only changing some URL. Is this possible at all? Or do I have to write my own (possibly duplicated) testing code for the production server?
I don't know how Scalatra manages its URLs, but one thing you can do in specs2 is control parameters from the command line:
class APISpec extends ScalatraSpec with CommandLineArguments { def is = s2"""
  Simple test
    invalid key should return status 401 $root401
    ${addServlet(new APIServlet(), s"$baseUrl/*")}
  """

  def baseUrl = {
    // assuming that you passed 'url www.production.com' on the command line
    val args = arguments.commandLine.split(" ")
    args.zip(args.drop(1))
      .collectFirst { case ("url", value) => value }
      .getOrElse("localhost:8080")
  }

  def root401 = get(s"$baseUrl/payments") {
    status must_== 401
  }
}
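With this in place the production host can be supplied as a specs2 command-line argument when running the tests, for example (hypothetical host name): sbt "testOnly *APISpec -- url www.production.com". If the argument is omitted, the spec falls back to localhost:8080.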