grailsApplication not getting injected in a service, Grails 2.1.0

I have a service in which I am accessing a few configuration properties from grailsApplication.
I am injecting it like this:
class MyWebService {
    def grailsApplication
    WebService webService = new WebService()

    def getProxy(url, flag) {
        return webService.getClient(url)
    }

    def getResponse() {
        def proxy = getProxy(grailsApplication.config.grails.wsdlURL, true)
        def response = proxy.getItem(ItemType)
        return response
    }
}
When I call the getProxy() method, I see this in the Tomcat logs:
No signature of method: org.example.MyWebService.getProxy() is applicable for argument types: (groovy.util.ConfigObject, java.lang.Boolean) values: [[:], true]
Possible solutions: getProxy(), getProxy(java.lang.String, boolean), setProxy(java.lang.Object)
which suggests grailsApplication is not getting injected into the service. Is there any alternate way to access the configuration object? According to burtbeckwith's post, ConfigurationHolder has been deprecated, and I can't think of anything else.
Interestingly, the very same service works fine in my local IDE (GGTS 3.1.0), which means grailsApplication is getting injected locally, but when I create a WAR and deploy it to a standalone Tomcat, it stops working.

I seem to have figured out the problem. grailsApplication is actually getting injected properly; otherwise the call would have thrown a NullPointerException, and the error shows the argument arriving as an empty ConfigObject ([:]) rather than a String. So it is the configuration properties that are not getting added. The scenario is this: I have a separate custom configuration file which holds configuration data for different environments. My application reads the environment type (a variable set from Tomcat) and, based on that, merges the corresponding properties from my custom configuration file. Those properties are probably not getting merged in the WAR deployment.

Related

Apache Meecrowave OAuth2 JPA

I succeeded in creating my own OAuth2 server using JCache as the token store, but I'm facing an issue when moving to JPA.
My configuration is:
"--users","test=test",
"--roles","test=test",
"--oauth2-provider","jpa",
"--oauth2-jpa-database-driver","org.h2.Driver",
"--oauth2-jpa-database-url","jdbc:h2:mem:oauth",
"--oauth2-jpa-database-username","sa",
"--oauth2-jpa-database-password",""
But I got the exception below during OpenJPA bootstrap:
There was an error while setting up the configuration plugin option "MetaDataFactory".
The plugin was of type "org.apache.openjpa.persistence.jdbc.PersistenceMappingFactory".
Setter methods for the following plugin properties were not available in that type: [
org.apache.cxf.rs.security.oauth2.tokens.bearer.BearerAccessToken,
org.apache.cxf.rs.security.oauth2.common.OAuthPermission,
org.apache.cxf.rs.security.oauth2.tokens.refresh.RefreshToken,
org.apache.cxf.rs.security.oauth2.grants.code.ServerAuthorizationCodeGrant,
org.apache.cxf.rs.security.oauth2.common.UserSubject].
Possible plugin properties are:
[AnnotationParser, ClasspathScan, FieldOverride, Files, JAR_FILE_URLS, MAPPING_FILE_NAMES, MODE_ALL, MODE_ANN_MAPPING, MODE_MAPPING, MODE_MAPPING_INIT, MODE_META, MODE_NONE, MODE_QUERY, PERSISTENCE_UNIT_ROOT_URL, Repository, Resources, STORE_DEFAULT, STORE_PER_CLASS, STORE_VERBOSE, StoreDirectory, StoreMode, Strict, Types, URLs, XMLAnnotationParser, XMLParser].
Ensure that your plugin configuration string uses key values that correspond to setter methods in the plugin class.
I suppose I missed something in configuration...
Any help would be appreciated.
Tx
Using --oauth2-jpa-properties you can set any persistence unit properties you want. I guess you will have to override the openjpa.MetaDataFactory default value, which is set to jpa(Types=org.apache.cxf.rs.security.oauth2.common.Client,org.apache.cxf.rs.security.oauth2.common.OAuthPermission,org.apache.cxf.rs.security.oauth2.common.UserSubject,org.apache.cxf.rs.security.oauth2.grants.code.ServerAuthorizationCodeGrant,org.apache.cxf.rs.security.oauth2.tokens.bearer.BearerAccessToken,org.apache.cxf.rs.security.oauth2.tokens.refresh.RefreshToken).
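For example, the override could be passed in the same argument-list style as the configuration above (a sketch only: it assumes --oauth2-jpa-properties accepts key=value entries, and the value simply restates the default quoted above, trimmed to the types you actually need):
"--oauth2-provider","jpa",
"--oauth2-jpa-properties","openjpa.MetaDataFactory=jpa(Types=org.apache.cxf.rs.security.oauth2.common.Client,org.apache.cxf.rs.security.oauth2.common.OAuthPermission,org.apache.cxf.rs.security.oauth2.common.UserSubject,org.apache.cxf.rs.security.oauth2.grants.code.ServerAuthorizationCodeGrant,org.apache.cxf.rs.security.oauth2.tokens.bearer.BearerAccessToken,org.apache.cxf.rs.security.oauth2.tokens.refresh.RefreshToken)"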
You can also check if your configuration is properly propagated and if there is no classpath conflict (another persistence.xml with an oauth2 unit?) because I just retested and your configuration seems to work.
Romain

Authenticate with ECE Elasticsearch Sink from Apache Flink (Scala code)

I get a compiler error when using the example provided in the Flink documentation. The Flink documentation provides sample Scala code to set the REST client factory parameters when talking to Elasticsearch: https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/elasticsearch.html.
When trying out this code I get a compiler error in IntelliJ which says "Cannot resolve symbol restClientBuilder".
I found the following SO question, which is EXACTLY my problem, except that it is in Java and I am doing this in Scala:
Apache Flink (v1.6.0) authenticate Elasticsearch Sink (v6.4)
I tried copy-pasting the solution code from the above question into IntelliJ; the auto-converted code also has compiler errors.
// provide a RestClientFactory for custom configuration on the internally created REST client
// I only show setMaxRetryTimeoutMillis for illustration purposes; the actual code will use an HTTP custom callback
esSinkBuilder.setRestClientFactory(
  restClientBuilder -> {
    restClientBuilder.setMaxRetryTimeoutMillis(10)
  }
)
Then I tried the following (Java code auto-converted to Scala by IntelliJ):
import org.apache.http.auth.AuthScope
import org.apache.http.auth.UsernamePasswordCredentials
import org.apache.http.client.CredentialsProvider
import org.apache.http.impl.client.BasicCredentialsProvider
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder
import org.elasticsearch.client.RestClientBuilder
// provide a RestClientFactory for custom configuration on the internally created REST client
esSinkBuilder.setRestClientFactory((restClientBuilder) => {
  def foo(restClientBuilder) = restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
    override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder = { // elasticsearch username and password
      val credentialsProvider = new BasicCredentialsProvider
      credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(es_user, es_password))
      httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
    }
  })
  foo(restClientBuilder)
})
The original code snippet produces the error "cannot resolve RestClientFactory", and the auto-converted Java-to-Scala code shows several other errors.
So basically I need to find a Scala version of the solution described in Apache Flink (v1.6.0) authenticate Elasticsearch Sink (v6.4).
Update 1: I was able to make some progress with some help from IntelliJ. The following code compiles and runs but there is another problem.
esSinkBuilder.setRestClientFactory(
  new RestClientFactory {
    override def configureRestClientBuilder(restClientBuilder: RestClientBuilder): Unit = {
      restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
        override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder = {
          // elasticsearch username and password
          val credentialsProvider = new BasicCredentialsProvider
          credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(es_user, es_password))
          httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
          httpClientBuilder.setSSLContext(trustfulSslContext)
        }
      })
    }
  }
)
The problem is that I am not sure if I should be doing a new of the RestClientFactory object. What happens is that the application connects to the Elasticsearch cluster but then discovers that the SSL cert is not valid, so I had to put in the trustfulSslContext (as described here: https://gist.github.com/iRevive/4a3c7cb96374da5da80d4538f3da17cb). This got me past the SSL issue, but now the ES REST client does a ping test, the ping fails, it throws an exception and the app shuts down. I suspect that the ping fails because of the SSL error and that maybe it is not using the trustfulSslContext I set up as part of the new RestClientFactory. This makes me suspect that I should not have done the new, and that there should be a simple way to update the existing RestClientFactory object; basically this is all happening because of my lack of Scala knowledge.
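As an aside on the question of doing a new: implementing a Java interface with new RestClientFactory { ... } (an anonymous class) is the idiomatic Scala 2.11 way and is not what breaks the ping. On Scala 2.12+ the same thing could be written as a lambda via SAM conversion; a minimal sketch, assuming Scala 2.12 and the same es_user/es_password values as above (SSL context line omitted here):
esSinkBuilder.setRestClientFactory { restClientBuilder =>
  restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
    override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder = {
      // elasticsearch username and password, as in the snippet above
      val credentialsProvider = new BasicCredentialsProvider
      credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(es_user, es_password))
      httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
    }
  })
}
Either form produces an equivalent factory, so the choice between them does not affect the SSL/ping behaviour.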
Happy to report that this is resolved. The code I posted in Update 1 is correct. The ping to ECE was not working for two reasons:
1. The certificate needs to include the complete chain: the root CA, the intermediate CA, and the cert for the ECE. This got rid of the whole trustfulSslContext workaround.
2. The ECE was sitting behind an ha-proxy, and the proxy mapped the hostname in the HTTP request to the actual deployment cluster name in ECE. This mapping logic did not take into account that the Java REST high-level client uses the org.apache.http.HttpHost class, which renders the host as hostname:port_number even when the port number is 443. Since the mapping was not found because of the :443, the ECE returned a 404 error instead of 200 OK (the only way to find this was to look at unencrypted packets at the ha-proxy). Once the mapping logic in ha-proxy was fixed, the mapping was found and the pings are now successful.
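To illustrate the second point, a small hypothetical snippet (the host name is made up): HttpHost keeps an explicitly given port even when it is the HTTPS default, which is why the proxy saw hostname:443.
import org.apache.http.HttpHost

// hypothetical host name, for illustration only
val host = new HttpHost("ece.example.com", 443, "https")
println(host.toHostString) // prints "ece.example.com:443", not "ece.example.com"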

Programmatically load openjpa javaagent - runtime optimization

I am writing a unit test for a JPA DAO and I would like to automatically load the javaagent (the JPA class enhancer) without having to specify the -javaagent VM parameter.
To achieve this I implemented a @BeforeClass annotated method like this:
// requires the JDK Attach API (tools.jar) on the test classpath
import java.lang.management.ManagementFactory;
import javax.persistence.Persistence;
import com.sun.tools.attach.VirtualMachine;

// the JVM name has the form "pid@hostname", so take everything before the '@'
String nameOfRunningVM = ManagementFactory.getRuntimeMXBean().getName();
String pid = nameOfRunningVM.substring(0, nameOfRunningVM.indexOf('@'));
VirtualMachine vm = VirtualMachine.attach(pid);  // attach to our own JVM
vm.loadAgent("openjpa-all-2.4.2.jar");           // load the OpenJPA enhancer agent
vm.detach();
emf = Persistence.createEntityManagerFactory("TEST_DB");
But I still get the error telling me that the entity classes were not enhanced when the entity manager factory is created.
org.apache.openjpa.persistence.ArgumentException: This configuration disallows runtime optimization, but the following listed types were not enhanced at build time or at class load time with a javaagent: ")
I can live with the -javaagent parameter but I am curious and would be pleased if anybody could share a solution or idea with us.
I am running my test with JUnit and Java 8

Setting DNS lookup's TimeToLive in Scala Play

I am trying to set the TimeToLive setting for DNS Lookup in my Scala-Play application. I use Play 2.5.9 and Scala 2.11.8 and follow the AWS guide. I tried the following ways:
in application.conf
// Set DNS lookup time-to-live to one second
networkaddress.cache.ttl=1
networkaddress.cache.negative.ttl=1
in AppModule or EagerSingleton (the code would be similar)
class AppModule() extends AbstractModule {
  Security.setProperty("networkaddress.cache.ttl", "1")
  Security.setProperty("networkaddress.cache.negative.ttl", "1")
  ...
}
passing as environment variable:
sbt -Dsun.net.inetaddr.ttl=1 clean run
I have the following piece of test code in the application:
for (i <- 1 to 25) {
  System.out.println(java.net.InetAddress.getByName("google.com").getHostAddress())
  Thread.sleep(1000)
}
This always prints the same IP address, e.g. 216.58.212.206. To me it looks like none of the approaches specified above have any effect. However, maybe I am testing something else and not actually the value of TTL. Therefore, I have two questions:
what is the correct way to pass a security variable into a Play application?
how to test it?
To change the settings for DNS cache via java.security.Security you have to provide a custom application loader.
package modules

import play.api.ApplicationLoader.Context
import play.api.inject.guice.{GuiceApplicationBuilder, GuiceApplicationLoader}

class ApplicationLoader extends GuiceApplicationLoader {
  override protected def builder(context: Context): GuiceApplicationBuilder = {
    java.security.Security.setProperty("networkaddress.cache.ttl", "1")
    super.builder(context)
  }
}
Once you have built this application loader, you can enable it in your application.conf:
play.application.loader = "modules.ApplicationLoader"
After that you can use your code above and check whether the DNS cache is behaving as you configured it. But keep in mind that your system is talking to a DNS server which does caching itself, so you may not see any change.
If you want to be sure that you get different addresses for google.com, you should query an authoritative name server like ns1.google.com.
If you want to test this, you could write a test which resolves the address, waits for the specified amount of time, and then resolves it again. But with a DNS system out of your control, like google.com, this can be a problem if you hit a caching DNS server.
If you want to write such a check, you could do it with:
import org.junit.runner.RunWith
import org.scalatest.junit.JUnitRunner
import org.scalatest.{FlatSpec, Matchers}
import play.api.test.WithApplicationLoader

@RunWith(classOf[JUnitRunner])
class DnsTests extends FlatSpec with Matchers {
  "DNS Cache ttl" should "refresh after 1 second" in new WithApplicationLoader(new modules.ApplicationLoader) {
    // put your test code here
  }
}
As you can see, you can put the custom application loader into the context of the application started behind your test.
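For the body of such a test, a minimal sketch (hypothetical, and subject to the upstream-caching caveat above) could be:
val first = java.net.InetAddress.getByName("google.com").getHostAddress
Thread.sleep(2000) // wait past the 1-second TTL configured above
val second = java.net.InetAddress.getByName("google.com").getHostAddress
// With a rotating upstream DNS the two lookups may differ; behind a caching
// resolver they can legitimately be equal, so treat this as an indication, not proof.
println(s"$first / $second")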

Camel Http4 2.12.2: "httpClientConfigurer" cannot be inferred from endpointUri

I'm using Scala akka-camel with the http4 component (version 2.12.2).
I'm creating a Camel producer with this endpoint:
def endpointUri = "https4://host-path" +
"?bridgeEndpoint=true" +
"&httpClientConfigurer=#configurer" +
"&clientConnectionManager=#manager"
where configurer is an HttpClientConfigurer registered in the Camel context registry (the same applies to manager).
When I send a CamelMessage to that endpoint, I see this in the Akka logs:
DEBUG o.a.c.component.http4.HttpComponent - Creating endpoint uri https4://host-path?bridgeEndpoint=true&httpClientConfigurer=#configurer&clientConnectionManager=#manager
DEBUG o.a.camel.util.IntrospectionSupport - Configured property: clientConnectionManager on bean: Endpoint["https4://host-path?bridgeEndpoint=true&httpClientConfigurer=#configurer&clientConnectionManager=#manager"] with value: org.apache.http.impl.conn.PoolingClientConnectionManager#3da3d36f
DEBUG o.a.camel.util.IntrospectionSupport - Configured property: bridgeEndpoint on bean: Endpoint["https4://host-path?bridgeEndpoint=true&httpClientConfigurer=#configurer&clientConnectionManager=#manager"] with value: true
INFO o.a.c.component.http4.HttpComponent - Registering SSL scheme https on port 443
INFO o.a.c.component.http4.HttpComponent - Registering SSL scheme https4 on port 443
So httpClientConfigurer is not configured, and I don't know why this parameter is being ignored. I've looked for a related issue in the Apache Camel issue tracker but found nothing similar.
Any idea?
Thanks in advance.
Finally, it's resolved. I ended up using neither clientConnectionManager nor httpClientConfigurer. Instead I used SSLContextParameters and a trait called TlsConfigurer that is meant to be mixed in with a Producer.
I want to use different X509 certificates, so, as Camel suggests:
Important: Only one instance of org.apache.camel.util.jsse.SSLContextParameters is supported per HttpComponent. If you need to use 2 or more different instances, you need to define a new HttpComponent per instance you need.
Therefore, the TlsConfigurer configure method must get an http4 component instance from the Camel context, apply the SSLContextParameters, and add the modified instance to the Camel context as a new component.
This is how it looks:
import org.apache.camel.component.http4.HttpComponent
import org.apache.camel.util.jsse._

trait TlsConfigurer {
  self: { val camel: akka.camel.Camel } =>

  def configure(
      componentName: String,
      keyStorePath: String,
      trustStorePath: String,
      password: String) {
    val ksp = new KeyStoreParameters
    ksp.setResource(keyStorePath)
    ksp.setPassword(password)
    val kmp = new KeyManagersParameters
    kmp.setKeyStore(ksp)
    kmp.setKeyPassword(password)
    val scp = new SSLContextParameters
    scp.setKeyManagers(kmp)
    val httpComponent = camel.context.getComponent("http4", classOf[HttpComponent])
    httpComponent.setSslContextParameters(scp)
    camel.context.addComponent(componentName, httpComponent)
  }
}
This way I can create two different endpoints, http-client1://... and http-client2://..., and manage their certificates separately.
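For example (hypothetical keystore paths and passwords, purely to show the intended usage of the trait above):
// register two http4-based components, each with its own keystore
configure("http-client1", "/etc/certs/client1.jks", "/etc/certs/client1-trust.jks", "changeit1")
configure("http-client2", "/etc/certs/client2.jks", "/etc/certs/client2-trust.jks", "changeit2")

// a producer can then target one of them explicitly
def endpointUri = "http-client1://host-path?bridgeEndpoint=true"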
httpClientConfigurer is not set on the HttpEndpoint via IntrospectionSupport, so you don't see a debug log line for it. I think you will find out whether the configurer is actually called if you add some logging in your custom configurer.