Setting DNS lookup's TimeToLive in Scala Play

I am trying to set the TimeToLive setting for DNS lookups in my Scala Play application. I use Play 2.5.9 and Scala 2.11.8, and I am following the AWS guide. I have tried the following approaches:
In application.conf:
// Set DNS lookup time-to-live to one second
networkaddress.cache.ttl=1
networkaddress.cache.negative.ttl=1
In AppModule or an eager singleton (the code would be similar):
class AppModule() extends AbstractModule {
  Security.setProperty("networkaddress.cache.ttl", "1")
  Security.setProperty("networkaddress.cache.negative.ttl", "1")
  ...
}
Passing it as a JVM system property:
sbt -Dsun.net.inetaddr.ttl=1 clean run
I have the following piece of test code in the application:
for (i <- 1 to 25) {
  System.out.println(java.net.InetAddress.getByName("google.com").getHostAddress())
  Thread.sleep(1000)
}
This always prints the same IP address, e.g. 216.58.212.206. To me it looks like none of the approaches above has any effect. However, maybe I am testing something else and not actually the value of the TTL. Therefore, I have two questions:
What is the correct way to pass a security property to a Play application?
How can I test it?

To change the DNS cache settings via java.security.Security, you have to provide a custom application loader.
package modules

import play.api.ApplicationLoader.Context
import play.api.inject.guice.{GuiceApplicationBuilder, GuiceApplicationLoader}

class ApplicationLoader extends GuiceApplicationLoader {
  override protected def builder(context: Context): GuiceApplicationBuilder = {
    java.security.Security.setProperty("networkaddress.cache.ttl", "1")
    super.builder(context)
  }
}
Once you have built this application loader, you can enable it in your application.conf:
play.application.loader = "modules.ApplicationLoader"
After that you can use your code above and check whether the DNS cache behaves as configured. But keep in mind that your system is accessing a DNS server which does its own caching, so you may not see any change.
If you want to be sure that you get different addresses for google.com, you should query an authoritative name server such as ns1.google.com.
If you want to write a test for this, you could write one that resolves the address, waits for the specified amount of time, and then resolves it again. But with a DNS system outside your control, like google.com, this could be a problem if you hit a caching DNS server.
If you want to write such a check, you could do it with:
@RunWith(classOf[JUnitRunner])
class DnsTests extends FlatSpec with Matchers {
  "DNS Cache ttl" should "refresh after 1 second" in
    new WithApplicationLoader(new modules.ApplicationLoader) {
      // put your test code here
    }
}
As you can see, you can put the custom application loader in the context of the application started behind your test.
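For illustration, here is a minimal sketch of what could go into that test body, with the caveats above in mind; the sleep duration and the deliberately weak assertions are my assumptions, since an upstream caching resolver may legitimately return the same address twice:
val first = java.net.InetAddress.getByName("google.com").getHostAddress
Thread.sleep(2000) // wait longer than the 1-second TTL configured in the loader
val second = java.net.InetAddress.getByName("google.com").getHostAddress
// With the short TTL the JVM re-resolves on the second call; the addresses
// may or may not differ, so only assert that both lookups succeeded.
first should not be empty
second should not be empty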

Related

How do I overwrite config settings from an included file

I have defined some akka remote settings in my application.conf:
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
    warn-about-java-serializer-usage = false
  }
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "myhost"
      port = 2561
      maximum-frame-size = 256000b
    }
  }
}
But then I have another program that needs to access other configuration settings from my application.conf, while ignoring the akka settings. So I've tried the following for the second program:
include "application"
akka {}
But the akka settings from application.conf are still being applied. I know this because I get a bind exception on the akka port, even though there should be no remote akka in my second app.
What is the best way for me to clear/ignore the akka config settings from my application.conf?
Let's say you want to override akka.remote.netty.tcp.port in your another.conf; you simply write:
include "application.conf"
akka.remote.netty.tcp.port = 2562
It will override the netty tcp port while leaving the rest unchanged and inherited.
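To see the effect, a quick check with the Typesafe Config API (the file name another.conf is taken from above and assumed to be on the classpath):
import com.typesafe.config.ConfigFactory

val config = ConfigFactory.load("another") // loads another.conf, which includes application.conf
println(config.getInt("akka.remote.netty.tcp.port"))        // 2562, the overridden value
println(config.getString("akka.remote.netty.tcp.hostname")) // "myhost", inherited unchanged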
After some experimentation, the best bet IMO is to factor out your application.conf so that your common settings are in their own conf files.
So you might put the common settings in a common-settings.conf and then in your Akka application you would have
include "common-settings"

akka {
  // Akka settings here
}
And the other modules that need common-settings can just, in their application.conf:
include "common-settings"
This may work better with a multi-module build. If the programs live in the same sbt module, you'll probably replace the canonical application.conf with akka-application.conf and other-application.conf, and point your ActorSystem setup code to akka-application.conf instead of application.conf. In that scenario application.conf probably shouldn't exist at all, so that bare ConfigFactory.load() calls fail very quickly; the alternative is to have different programs fighting over who owns application.conf.
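A minimal sketch of that last variant, assuming the file is named akka-application.conf and sits on the module's classpath:
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

// Load the explicitly named config instead of the default application.conf,
// so each program owns its own file and bare load() calls fail fast.
val system = ActorSystem("my-system", ConfigFactory.load("akka-application"))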
The issue you are facing is actually by design. From the HOCON first page documentation:
Duplicate keys are allowed; later values override earlier, except for object-valued keys where the two objects are merged recursively
Therefore, when you add an akka {} to your second file, it is merged into the earlier definition, not overwritten.
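You can observe the merge rule directly with the Typesafe Config API; this small sketch parses both definitions in order, mimicking what the include does:
import com.typesafe.config.ConfigFactory

val merged = ConfigFactory.parseString(
  """akka { actor { provider = "akka.remote.RemoteActorRefProvider" } }
    |akka {}
  """.stripMargin)

// The empty object is merged into the earlier one, so nothing is lost:
println(merged.getString("akka.actor.provider")) // akka.remote.RemoteActorRefProvider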
As I see it, you have two options.
The first is copying the configuration and overriding every property with the value you actually want. That means that in the second program you can add:
akka {
  actor {
    provider = "new value"
    warn-about-java-serializer-usage = false
  }
  remote {
    enabled-transports = ["completely new values"]
    netty.tcp {
      hostname = "etc..."
      port = 2561
      maximum-frame-size = 256000b
    }
  }
}
The other option, which I like less, is to overwrite the object called akka. To do that, you need to assign it something that is not an object; otherwise the two values will just be merged. For instance, if you add akka = 4 to the second program, it will completely remove all of the other values. But in this case your program has to deal with those properties being missing, which means that somewhere in your code you will have to write something like this (don't forget that Config throws on missing keys):
Try(config.getString("akka.actor.provider")).getOrElse("some default value")
You have to do that because akka is now a scalar value, and you cannot look into it as an object.

Authenticate with ECE Elasticsearch Sink from Apache Flink (Scala code)

Compiler error when using the example provided in the Flink documentation. The Flink documentation provides sample Scala code to set the REST client factory parameters when talking to Elasticsearch: https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/elasticsearch.html.
When trying out this code I get a compiler error in IntelliJ which says "Cannot resolve symbol restClientBuilder".
I found the following SO question, which is EXACTLY my problem, except that it is in Java and I am doing this in Scala:
Apache Flink (v1.6.0) authenticate Elasticsearch Sink (v6.4)
I tried copy-pasting the solution code provided in the above SO question into IntelliJ; the auto-converted code also has compiler errors.
// provide a RestClientFactory for custom configuration on the internally created REST client
// I only show setMaxRetryTimeoutMillis for illustration purposes; the actual code will use a custom HTTP callback
esSinkBuilder.setRestClientFactory(
  restClientBuilder -> {
    restClientBuilder.setMaxRetryTimeoutMillis(10)
  }
)
Then I tried the following (Java code auto-converted to Scala by IntelliJ):
import org.apache.http.auth.AuthScope
import org.apache.http.auth.UsernamePasswordCredentials
import org.apache.http.client.CredentialsProvider
import org.apache.http.impl.client.BasicCredentialsProvider
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder
import org.elasticsearch.client.RestClientBuilder
// provide a RestClientFactory for custom configuration on the internally created REST client
esSinkBuilder.setRestClientFactory((restClientBuilder) => {
  def foo(restClientBuilder) = restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
    override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder = { // elasticsearch username and password
      val credentialsProvider = new BasicCredentialsProvider
      credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(es_user, es_password))
      httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
    }
  })
  foo(restClientBuilder)
})
The original code snippet produces the error "cannot resolve RestClientFactory", and then the Java-to-Scala conversion shows several other errors.
So basically I need to find a Scala version of the solution described in Apache Flink (v1.6.0) authenticate Elasticsearch Sink (v6.4).
Update 1: I was able to make some progress with some help from IntelliJ. The following code compiles and runs, but there is another problem.
esSinkBuilder.setRestClientFactory(
  new RestClientFactory {
    override def configureRestClientBuilder(restClientBuilder: RestClientBuilder): Unit = {
      restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
        override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder = {
          // elasticsearch username and password
          val credentialsProvider = new BasicCredentialsProvider
          credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(es_user, es_password))
          httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
          httpClientBuilder.setSSLContext(trustfulSslContext)
        }
      })
    }
  }
)
The problem is that I am not sure whether I should be creating the RestClientFactory object with new. What happens is that the application connects to the Elasticsearch cluster but then discovers that the SSL cert is not valid, so I had to put in the trustfulSslContext (as described here: https://gist.github.com/iRevive/4a3c7cb96374da5da80d4538f3da17cb). This got me past the SSL issue, but now the ES REST client does a ping test, the ping fails, it throws an exception, and the app shuts down. I suspect the ping fails because of the SSL error, and maybe it is not using the trustfulSslContext I set up as part of the new RestClientFactory. This makes me suspect that I should not have used new and that there should be a simple way to update the existing RestClientFactory object; basically this is all happening because of my lack of Scala knowledge.
Happy to report that this is resolved. The code I posted in Update 1 is correct. The ping to ECE was not working for two reasons:
1. The certificate needs to include the complete chain: the root CA, the intermediate CA, and the cert for the ECE. This got rid of the whole trustfulSslContext workaround.
2. The ECE was sitting behind an ha-proxy, and the proxy mapped the hostname in the HTTP request to the actual deployment cluster name in ECE. This mapping logic did not take into account that the Java REST high-level client uses the org.apache.http.HttpHost class, which renders the host as hostname:port_number even when the port number is 443. Since the lookup failed because of the :443 suffix, ECE returned a 404 error instead of 200 OK (the only way to find this was to look at unencrypted packets at the ha-proxy). Once the mapping logic in ha-proxy was fixed, the mapping was found and the pings are now successful.
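For reference, the HttpHost behavior described in the second point can be reproduced in a couple of lines (the host name below is made up):
import org.apache.http.HttpHost

val host = new HttpHost("ece.example.com", 443, "https")
// HttpHost keeps the explicit port even for the default HTTPS port, so the
// value the proxy saw was "hostname:443" rather than the bare hostname.
println(host.toHostString) // ece.example.com:443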

Why does my actorFor fail when deployed to an Akka Microkernel JAR?

I have a somewhat simple project deployed as a JAR. I am starting up a supervisor actor that confirms it is booting up by sending out the following log message:
[akka://service-kernel/user/Tracker] Starting new Tracker
However, when I reference the actor via actorFor locally with an sbt run, it is found without a problem. In production, I use the same .actorFor("akka://service-kernel/user/Tracker") and it throws a NullPointerException. I can confirm via the logs that in production the Tracker has sent out its confirmation that it booted up.
Are there any issues with making actor references when using a Microkernel deployed as a JAR?
Edit
I suspect that both the way I reference the system and the way Akka treats the startup class are related to the issue. Since I have specified a startup class called ServiceKernel, I am performing the reference as such: ServiceKernel.system.actorFor. I will provide an answer if confirmed.
Confirmed that it was related to the startup class handling the Microkernel.
The ServiceKernel mentioned above is used in the start script to boot up the Microkernel JAR: ./start com.package.ServiceKernel. In an sbt shell this isn't needed, so the alternative class I provided works well for referencing an actor system.
However, in a Microkernel the ServiceKernel appears to be using a different actor system altogether, so if you reference that system (like I did), then actorFor lookups will always fail. I solved the problem by passing the system down through the boot classes into the specific class where I was making the actorFor reference, and it worked. I did it like this (pseudo-code):
class ServiceKernel extends Bootable {
  val system = ActorSystem("service-kernel")

  def startup = {
    system.actorOf(Props(new Boot(isDev, system))) ! Start
  }
}
And then passing it to an HttpApi class:
class Boot(val isDev: Boolean, system: ActorSystem) extends Actor with SprayCanHttpServerApp {
  def receive = {
    case Start =>
      // setup HTTP server
      val service = system.actorOf(Props(new HttpApi(system)), "tracker-http-api")
  }
}
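With the system passed down like this, the lookup from the question resolves against the same actor system that created the actor, e.g. (using the same actorFor API as the question, deprecated in later Akka versions):
// Resolves because `system` is the ActorSystem the Tracker was started in.
val tracker = system.actorFor("akka://service-kernel/user/Tracker")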

Code coverage on Play! project

I have a Play! project to which I would like to add some code coverage information. So far I have tried JaCoCo and scct. The former has the problem that it is based on bytecode, so it seems to warn about missing tests for methods that are autogenerated by the Scala compiler, such as copy or canEqual. scct seems a better option, but with both I get many errors during tests.
Let me stick with scct. I essentially get errors for every test that tries to connect to the database. Many of my tests load some fixtures into an in-memory H2 database and then make some assertions. My Global.scala contains
override def onStart(app: Application) {
  SessionFactory.concreteFactory = Some(() => connection)

  def connection() = {
    Session.create(DB.getConnection()(app), new MySQLInnoDBAdapter)
  }
}
while the tests are usually enclosed in a block like
class MySpec extends Specification {
  def app = FakeApplication(additionalConfiguration = inMemoryDatabase())

  "The models" should {
    "be five" in running(app) {
      Fixtures.load()
      MyModels.all.size should be_==(5)
    }
  }
}
The running(app) call allows me to run a test in the context of a working application connected to an in-memory database; at least, usually. But when I run code coverage tasks, such as scct coverage:doc, I get a lot of errors related to connecting to the database.
What is even more weird is that there are at least four different errors, like:
ObjectExistsException: Cache play already exists
SQLException: Attempting to obtain a connection from a pool that has already been shutdown
Configuration error [Cannot connect to database [default]]
No suitable driver found for jdbc:h2:mem:play-test--410454547
Why is it that launching tests in the default configuration is able to connect to the database, while running in the context of scct (or JaCoCo) fails to initialize the cache and the DB?
specs2 tests run in parallel by default. Play disables parallel execution for the standard unit test configuration, but scct uses a different configuration, so it doesn't know not to run in parallel.
Try adding this to your Build.scala:
.settings(parallelExecution in ScctPlugin.ScctTest := false)
Alternatively, you can add sequential to the beginning of your test classes to force all possible run configurations to run sequentially. I've still got both in my files, as I think I had some problems with the Build.scala solution at one point when I was using an early release candidate of Play.
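For context, here is a sketch of where that setting could sit in a Play 2.x Build.scala; the project name and version are placeholders, and ScctPlugin is assumed to be made available through project/plugins.sbt:
import sbt._
import Keys._

object ApplicationBuild extends Build {
  val main = play.Project("my-app", "1.0-SNAPSHOT", Seq.empty)
    .settings(ScctPlugin.instrumentSettings: _*)                 // assumed scct wiring
    .settings(parallelExecution in ScctPlugin.ScctTest := false) // run scct tests sequentially
}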
A better option for Scala code coverage is Scoverage which gives statement line coverage.
https://github.com/scoverage/scalac-scoverage-plugin
Add to project/plugins.sbt:
addSbtPlugin("com.sksamuel.scoverage" % "sbt-scoverage" % "1.0.1")
Then run SBT with
sbt clean coverage test
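Depending on the plugin version, report generation may be a separate step; with recent sbt-scoverage releases the reports are produced by their own task:
sbt coverageReport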
You need to add sequential at the beginning of your Specification:
class MySpec extends Specification {
  sequential

  "MyApp" should {
    //...//
  }
}

grailsApplication not getting injected in a service, Grails 2.1.0

I have a service in which I am accessing a few configuration properties from grailsApplication.
I am injecting it like this:
class MyWebService {
  def grailsApplication
  WebService webService = new WebService()

  def getProxy(url, flag) {
    return webService.getClient(url)
  }

  def getResponse() {
    def proxy = getProxy(grailsApplication.config.grails.wsdlURL, true)
    def response = proxy.getItem(ItemType)
    return response
  }
}
When I call the getProxy() method, I see this in the Tomcat logs:
No signature of method: org.example.MyWebService.getProxy() is applicable for argument types: (groovy.util.ConfigObject, java.lang.Boolean) values: [[:], true]
Possible solutions: getProxy(), getProxy(java.lang.String, boolean), setProxy(java.lang.Object)
This suggests grailsApplication is not getting injected into the service. Is there any alternate way to access the configuration object? According to burtbeckwith's post, ConfigurationHolder has been deprecated, and I can't think of anything else.
Interestingly, the very same service works fine in my local IDE (GGTS 3.1.0), which means that grailsApplication is getting injected locally; but when I create a war to deploy to a standalone Tomcat, it stops getting injected.
I seem to have figured out the problem. grailsApplication is actually getting injected properly; otherwise it would have thrown a NullPointerException. I believe the configuration properties are not getting added. The scenario is this: I have a separate custom configuration file which holds configuration data for different environments. My application reads the environment type (a variable set from Tomcat) and, based on that, merges the corresponding config properties from my custom configuration file. I think those properties are probably not getting added.