I'm using the sbt-dynamodb plugin to run a local AWS DynamoDB instance for my integration tests. To set up an AWS SDK client I need to provide the client endpoint:
val sdkClient = new AmazonDynamoDBAsyncClient(awsCreds)
sdkClient.setEndpoint("http://localhost:8000")
The plugin allows you to configure the DB port number in build.sbt:
DynamoDBLocal.Keys.dynamoDBLocalPort := Some(8080)
How can I access that sbt key (the port number) from my test code? I want to do something like:
val dbPort = ??? // get from Keys.dynamoDBLocalPort somehow
sdkClient.setEndpoint("http://localhost:" + dbPort")
Thanks!
I am running SBT 1.2.8 and my project needs to download packages from a repo on a privately hosted Artifactory instance. My repo is protected by basic auth. After reading a multitude of examples and instructions, I created a credentials.properties file in my repo.
realm=Artifactory Realm
host=artifactory.mycompany.com
username=my_username
password=my_password
I then added the following to my build.sbt file
credentials += Credentials(new File("credentials.properties"))
Then I added the repository to my list of resolvers in resolvers.sbt
"My Company Artifactory" at "https://artifactory.mycompany.com/artifactory/my_private_repo/",
I built my application and was able to download the protected packages just fine.
However, a system administrator at my company requested I turn on the “Hide Existence of Unauthorized Resources” setting in Artifactory. This setting forces Artifactory to return 404 errors when an unauthenticated user tries to access protected resources. Usually in this case, Artifactory returns 401s with a WWW-Authenticate header.
Suddenly, my application was unable to resolve its dependencies. I turned the Artifactory setting off and then back on again and verified this setting was, in fact, the cause of my problems.
It appears as though SBT will not send credentials unless it is challenged with a 401 and a WWW-Authenticate header (with the proper realm). Looking at the docs and GitHub issues for SBT, Ivy, and Coursier, it seems this “preemptive authentication” is not a supported feature.
I spent many hours trying to resolve this in various ways, but I cannot find a solution. Here is what I have tried:
Adding my Artifactory username and password to the repository URL, so it looks like https://my_username:my_password@artifactory.mycompany.com/artifactory/my_private_repo/. This worked in my browser and a REST client, but not with SBT.
Omitting the “realm” from my credentials file
Switching to SBT 1.3.9 and trying everything above with the new version.
Does anyone know how I can get SBT to use preemptive HTTP basic auth? It appears both Maven and Gradle support this (see links below), but I cannot find anything in the SBT docs.
Maven support for preemptive auth: https://jfrog.com/knowledge-base/why-does-my-maven-builds-are-failing-with-a-404-error-when-hide-existence-of-unauthorized-resources-is-enabled/
Gradle support for preemptive auth:
https://github.com/gradle/gradle/pull/386/files
I'm almost thinking of setting up a local proxy to send the proper headers to Artifactory and pointing SBT at the local proxy as a resolver. However, that seems needlessly cumbersome for developers to use.
You are correct.
You can set up an AbstractRepository. See https://github.com/SupraFii/sbt-google-artifact-registry/blob/master/src/main/scala/ch/firsts/sbt/gar/ArtifactRegistryRepository.scala#L21 for an example:
package ch.firsts.sbt.gar

import java.io.File
import java.util

import com.google.cloud.artifactregistry.wagon.ArtifactRegistryWagon
import org.apache.ivy.core.module.descriptor.Artifact
import org.apache.ivy.plugins.repository.AbstractRepository
import org.apache.maven.wagon.repository.Repository

class ArtifactRegistryRepository(repositoryUrl: String) extends AbstractRepository {
  val repo = new Repository("google-artifact-registry", repositoryUrl)
  val wagon = new ArtifactRegistryWagon()

  override def getResource(source: String): ArtifactRegistryResource = {
    val plainSource = stripRepository(source)
    wagon.connect(repo)
    ArtifactRegistryResource(repositoryUrl, plainSource, wagon.resourceExists(plainSource))
  }

  override def get(source: String, destination: File): Unit = {
    val adjustedSource = if (destination.toString.endsWith("sha1"))
      source + ".sha1"
    else if (destination.toString.endsWith("md5"))
      source + ".md5"
    else
      source

    wagon.connect(repo)
    wagon.get(adjustedSource, destination)
  }

  override def list(parent: String): util.List[String] = sys.error("Listing repository contents is not supported")

  override def put(artifact: Artifact, source: File, destination: String, overwrite: Boolean): Unit = {
    val plainDestination = stripRepository(destination)
    wagon.connect(repo)
    wagon.put(source, plainDestination)
  }

  private def stripRepository(fullName: String): String = fullName.substring(repositoryUrl.length + 1)
}
I am using Flink 1.8.0 and I am trying to query my job state.
val descriptor = new ValueStateDescriptor("myState", Types.CASE_CLASS[Foo])
descriptor.setQueryable("my-queryable-State")
I used port 9067, which is the default port according to this. My client:
val client = new QueryableStateClient("127.0.0.1", 9067)
val jobId = JobID.fromHexString("d48a6c980d1a147e0622565700158d9e")
val execConfig = new ExecutionConfig
val descriptor = new ValueStateDescriptor("my-queryable-State", Types.CASE_CLASS[Foo])
val res: Future[ValueState[Foo]] = client.getKvState(jobId, "my-queryable-State","a", BasicTypeInfo.STRING_TYPE_INFO, descriptor)
res.map(_.toString).pipeTo(sender)
but I am getting :
[ERROR] [06/25/2019 20:37:05.499] [bvAkkaHttpServer-akka.actor.default-dispatcher-5] [akka.actor.ActorSystemImpl(bvAkkaHttpServer)] Error during processing of request: 'org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /127.0.0.1:9067'. Completing with 500 Internal Server Error response. To change default exception handling behavior, provide a custom ExceptionHandler.
java.util.concurrent.CompletionException: org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /127.0.0.1:9067
What am I doing wrong?
How and where should I define QueryableStateOptions?
So if you want to use Queryable State, you need to add the proper JAR to your Flink installation. The JAR is flink-queryable-state-runtime; it can be found in the opt folder of your Flink distribution and you should move it to the lib folder.
As for the second question, QueryableStateOptions is just a class that is used to create static ConfigOption definitions. The definitions are then used to read the configuration from the flink-conf.yaml file. So currently the only way to configure Queryable State is through the flink-conf.yaml file in the Flink distribution.
EDIT: Also, try reading this; it provides more info on how Queryable State works. You shouldn't really connect directly to the server port; rather, you should use the proxy port, which by default is 9069.
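Under that assumption, the client from the question would point at the proxy port instead; only the port changes:
// connect to the queryable state proxy port (9069 by default), not the state server port (9067)
val client = new QueryableStateClient("127.0.0.1", 9069)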
Compiler error when using the example provided in the Flink documentation. The Flink documentation provides sample Scala code to set the REST client factory parameters when talking to Elasticsearch: https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/elasticsearch.html.
When trying out this code I get a compiler error in IntelliJ which says "Cannot resolve symbol restClientBuilder".
I found the following SO question which is EXACTLY my problem, except that it is in Java and I am doing this in Scala.
Apache Flink (v1.6.0) authenticate Elasticsearch Sink (v6.4)
I tried copy-pasting the solution code provided in the above SO question into IntelliJ; the auto-converted code also has compiler errors.
// provide a RestClientFactory for custom configuration on the internally created REST client
// I only show setMaxRetryTimeoutMillis for illustration purposes, the actual code will use an HTTP custom callback
esSinkBuilder.setRestClientFactory(
  restClientBuilder -> {
    restClientBuilder.setMaxRetryTimeoutMillis(10)
  }
)
Then I tried (auto-generated Java-to-Scala code by IntelliJ):
import org.apache.http.auth.AuthScope
import org.apache.http.auth.UsernamePasswordCredentials
import org.apache.http.client.CredentialsProvider
import org.apache.http.impl.client.BasicCredentialsProvider
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder
import org.elasticsearch.client.RestClientBuilder
// provide a RestClientFactory for custom configuration on the internally created REST client
esSinkBuilder.setRestClientFactory((restClientBuilder) => {
  def foo(restClientBuilder) = restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
    override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder = { // elasticsearch username and password
      val credentialsProvider = new BasicCredentialsProvider
      credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(es_user, es_password))
      httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
    }
  })
  foo(restClientBuilder)
})
The original code snippet produces the error "cannot resolve RestClientFactory", and the Java-to-Scala conversion shows several other errors.
So basically I need to find a Scala version of the solution described in Apache Flink (v1.6.0) authenticate Elasticsearch Sink (v6.4).
Update 1: I was able to make some progress with some help from IntelliJ. The following code compiles and runs but there is another problem.
esSinkBuilder.setRestClientFactory(
  new RestClientFactory {
    override def configureRestClientBuilder(restClientBuilder: RestClientBuilder): Unit = {
      restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
        override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder = {
          // elasticsearch username and password
          val credentialsProvider = new BasicCredentialsProvider
          credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(es_user, es_password))
          httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
          httpClientBuilder.setSSLContext(trustfulSslContext)
        }
      })
    }
  }
)
The problem is that I am not sure if I should be doing a new of the RestClientFactory object. What happens is that the application connects to the Elasticsearch cluster but then discovers that the SSL cert is not valid, so I had to add the trustfulSslContext (as described here https://gist.github.com/iRevive/4a3c7cb96374da5da80d4538f3da17cb). This got me past the SSL issue, but now the ES REST client does a ping test, the ping fails, it throws an exception, and the app shuts down. I suspect the ping fails because of the SSL error and maybe it is not using the trustfulSslContext I set up as part of new RestClientFactory. This makes me suspect I should not have done the new, and that there should be a simple way to update the existing RestClientFactory object; basically this is all happening because of my lack of Scala knowledge.
Happy to report that this is resolved. The code I posted in Update 1 is correct. The ping to ECE was not working for two reasons:
The certificate needs to include the complete chain: the root CA, the intermediate CA, and the cert for the ECE. This helped get rid of the whole trustfulSslContext stuff.
The ECE was sitting behind an ha-proxy, and the proxy mapped the hostname in the HTTP request to the actual deployment cluster name in ECE. This mapping logic did not take into account that the Java REST high-level client uses the org.apache.http.HttpHost class, which renders the hostname as hostname:port_number even when the port number is 443. Since the mapping was not found because of the :443, the ECE returned a 404 error instead of 200 OK (the only way to find this was to look at unencrypted packets at the ha-proxy). Once the mapping logic in ha-proxy was fixed, the mapping was found and the pings are now successful.
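For illustration, this is the HttpHost behaviour referred to above (the hostname below is made up); the port is kept in the rendered host string even for standard HTTPS:
import org.apache.http.HttpHost

// HttpHost keeps the explicit port when rendering the host, even for 443
val host = new HttpHost("ece.example.com", 443, "https")
println(host.toHostString) // prints "ece.example.com:443"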
I'm trying to integrate a remote, REST-based configuration service as part of my project's configuration fallbacks.
I've been using TypesafeConfig up to now, with a simple file-based approach:
lazy val appConf = ConfigFactory.parseResources(Constants.APP_CONF_FILE)
but now, I'm looking for a way to do something like this:
lazy val remoteRESTConfigService: Config = ConfigFactory.parseResources("https://example.com/remote_config_api")
lazy val fileConf: Config = ConfigFactory.parseResources(Constants.APP_CONF_FILE)
lazy val appConf: Config = fileConf.withFallback(remoteRESTConfigService)
Does anyone know how this can be done?
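One possible approach, if the endpoint serves HOCON or JSON that Typesafe Config can parse, is ConfigFactory.parseURL. This is only a sketch, not verified against a real service; the URL is the placeholder from the question:
import java.net.URL
import com.typesafe.config.{Config, ConfigFactory}

// parse the remote endpoint directly; it must return HOCON/JSON
lazy val remoteRESTConfigService: Config = ConfigFactory.parseURL(new URL("https://example.com/remote_config_api"))
lazy val fileConf: Config = ConfigFactory.parseResources(Constants.APP_CONF_FILE)
// the local file wins; the remote service only fills in missing keys
lazy val appConf: Config = fileConf.withFallback(remoteRESTConfigService)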
I am trying to get Integration Tests to work in Play Framework 2.1.1
My goal is to be able to run Integration Tests after unit tests to test component level functionality with the underlying database. The underlying database will have stored procedures, so it is essential that I am able to do more than just the "inMemoryDatabase" that you can configure in the Fake Application.
I would like the process to be:
Start TestServer with FakeApplication. Use an alternative integration test conf file
Run Integration tests (can filter by package or what not here)
Stop TestServer
I believe the best way to do this is in the Build.scala file.
I need assistance with how to set up the Build.scala file, as well as how to load the alternative integration test config file (project/it.conf right now).
Any assistance is greatly appreciated!
I have introduced a method that works for the time being. I would like to see Play introduce the concept of separate "Test" vs "IntegrationTest" scopes in sbt.
I could go into Play and see how they build their project and settings in sbt and try to get the IntegrationTest scope to work, but right now I have already spent too much time trying to get it functioning.
What I did was to create a Specs2 Around-with-Scope class that gives me the ability to enforce a singleton instance of a TestServer. Anything that uses the class will attempt to start the test server; if it is already running, it won't be restarted.
It appears as though Play and SBT do a good job of making sure the server is shut down when the test ends, which works so far.
Here is the sample code. Still hoping for some more feedback.
class WithTestServer(val app: FakeApplication = FakeApplication(),
                     val port: Int = Helpers.testServerPort) extends Around with Scope {

  implicit def implicitApp = app
  implicit def implicitPort: Port = port

  synchronized {
    if (!WithTestServer.isRunning) {
      WithTestServer.start(app, port)
    }
  }

  // Implements around an example
  override def around[T: AsResult](t: => T): org.specs2.execute.Result = {
    println("Running test with test server===================")
    AsResult(t)
  }
}
object WithTestServer {
  var singletonTestServer: TestServer = null
  var isRunning = false

  def start(app: FakeApplication = FakeApplication(), port: Int = Helpers.testServerPort) = {
    implicit def implicitApp = app
    implicit def implicitPort: Port = port

    singletonTestServer = TestServer(port, app)
    singletonTestServer.start()
    isRunning = true
  }
}
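For context, a spec using it would look roughly like this (the spec name and assertion are made up; the point is just that every example wrapped in new WithTestServer reuses the same running server):
import org.specs2.mutable.Specification
import play.api.test.Helpers

class AccountsIntegrationSpec extends Specification {
  "the accounts repository" should {
    "talk to the real database through the running test server" in new WithTestServer {
      // app and port come from WithTestServer; the server is started once and reused
      port must equalTo(Helpers.testServerPort)
    }
  }
}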
To take this a step further, I simply set up two folders (packages) in the play/test folder:
- test/unit (test.unit package)
- test/integration (test.integration package)
Now, when I run from my Jenkins server, I can run:
play test-only test.unit.*Spec
That will execute all unit tests.
To run my integration tests, I run:
play test-only test.integration.*Spec
That's it. This works for me for the time being until Play adds Integration Test as a lifecycle step.
The answer for this is shared in this blog post https://blog.knoldus.com/integration-test-configuration-in-play-framework/
Basically, in build.sbt:
// define a new configuration
lazy val ITest = config("it") extend(Test)

// and add it to your project:
lazy val yourProject = (project in file("yourProject"))
  .configs(ITest)
  .settings(
    inConfig(ITest)(Defaults.testSettings),
    ITest / scalaSource := baseDirectory.value / "it",
    [the rest of your configuration comes here])
  .enablePlugins(PlayScala)
Just tested this in 2.8.3 and it works like a charm.
Launch your ITs from sbt using:
[yourProject] $ it:test