My project setup looks as follows:
I would like to dynamically fill in application.conf values.
These values should be read from the correct properties file (${env}.props.properties). Which file is correct depends on the env property passed with the run or build command (-Denv=xxx).
application.conf
key=${my.property.value.read.from.props.properties.file}
key2=...
Thanks in advance!
You can tell Typesafe Config to load a different config file altogether by specifying the flag -Dconfig.resource=your.file.properties as you run your application. If the config file is not a bundled resource, you can use -Dconfig.file=/path/to/your.file.properties instead. (You can also specify a URL with -Dconfig.url; see https://github.com/typesafehub/config#user-content-standard-behavior for more info.)
Doing this will skip loading application.conf altogether so remember to set Play!-specific properties in your own properties-file.
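For example, with the per-environment files from the question bundled as resources, the start command might look like this (./bin/myapp stands in for however you launch your app):

# bundled on the classpath
./bin/myapp -Dconfig.resource=qa.props.properties

# or an external file
./bin/myapp -Dconfig.file=/etc/myapp/qa.props.properties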
You can try Typesafe Config's ConfigFactory.invalidateCaches() to invalidate cached config entries. As the API doc says, first make the changes, then call the API above, followed by load(). (One solution would be a scheduler that calls it every x interval.)
Disclaimer - I haven't tried it myself
https://lightbend.github.io/config/latest/api/com/typesafe/config/ConfigFactory.html#invalidateCaches--
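A minimal sketch of that approach in Scala (the scheduling around it is up to you):

import com.typesafe.config.{Config, ConfigFactory}

// Drop the cached config snapshot, then re-parse the default config
// (system properties, application.conf, reference.conf). Call this after
// the underlying files have changed.
def reloadConfig(): Config = {
  ConfigFactory.invalidateCaches()
  ConfigFactory.load()
}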
I have installed and tested kafka connect in distributed mode, it works now and it connects to the configured sink and reads from the configured source.
That being the case, I moved on to enhancing my installation. The one area I think needs immediate attention is the fact that the only available means of creating a connector is a REST call, which means I need to send my information over the wire, unprotected.
In order to secure this, Kafka introduced the new ConfigProvider mechanism.
This is helpful, as it allows setting properties on the server and then referencing them in the REST call, like so:
{
  ...
  "property": "${file:/path/to/file:nameOfThePropertyInFile}"
  ...
}
This works really well, just by adding the properties file on the server and adding the following config to the distributed.properties file:
config.providers=file # multiple comma-separated provider types can be specified here
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
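For instance, the referenced file could be a plain Java properties file on the worker host (the path and key name are the placeholders from the snippet above; the value is hypothetical):

# /path/to/file
nameOfThePropertyInFile=someSecretValue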
While this solution works, it does not really ease my concerns regarding security: the information has gone from being sent over the wire to sitting in a repository, in plain text for everyone to see.
The kafka team foresaw this issue and allowed clients to produce their own configuration providers implementing the interface ConfigProvider.
I have created my own implementation and packaged it in a jar, giving it the suggested final name:
META-INF/services/org.apache.kafka.common.config.ConfigProvider
and added the following entry in the distributed file:
config.providers=cust
config.providers.cust.class=com.somename.configproviders.CustConfigProvider
However, I am getting an error from Connect stating that a class implementing ConfigProvider with the name:
com.somename.configproviders.CustConfigProvider
could not be found.
I am at a loss now, because the documentation on their site is not very explicit about how to configure custom config providers.
Has someone worked on a similar issue and could provide some insight into this? Any help would be appreciated.
I just went through this recently to set up a custom ConfigProvider. The official doc is ambiguous and confusing.
I have created my own implementation and packaged it in a jar, giving it the suggested final name:
META-INF/services/org.apache.kafka.common.config.ConfigProvider
You can name the final jar whatever you like, but it needs to be packed in jar format, with a .jar suffix.
Here is the complete step-by-step (a sketch of the provider class itself follows the steps). Suppose your custom ConfigProvider's fully-qualified name is com.my.CustomConfigProvider.MyClass.
1. Create a file at META-INF/services/org.apache.kafka.common.config.ConfigProvider. The file content is the fully-qualified class name:
com.my.CustomConfigProvider.MyClass
2. Include your source code and the above META-INF folder when generating the jar package. If you are using Maven, the file structure looks like the sketch below.
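For reference, the standard Maven layout would be roughly this (a sketch; note that the ConfigProvider interface itself lives in the org.apache.kafka.common.config.provider package, as in the FileConfigProvider entry earlier, so the service file should be named after the interface's fully-qualified name):

src/
  main/
    scala/com/my/CustomConfigProvider/MyClass.scala    (or java/... for Java sources)
    resources/
      META-INF/services/
        org.apache.kafka.common.config.provider.ConfigProvider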
3. Put your final jar file, say custom-config-provider-1.0.jar, under the Kafka worker plugin folder (the default is /usr/share/java; see PLUGIN_PATH in the Kafka worker config file).
4. Upload all the dependency jars to PLUGIN_PATH as well. Use the META-INF/MANIFEST.MF file inside your jar to configure the Class-Path of the dependent jars your code will use.
5. In the Kafka worker config file, create two additional properties:
CONNECT_CONFIG_PROVIDERS: 'mycustom'  // alias name of your ConfigProvider
CONNECT_CONFIG_PROVIDERS_MYCUSTOM_CLASS: 'com.my.CustomConfigProvider.MyClass'
6. Restart the workers.
7. Update your connector config file by POSTing it to the Kafka REST API with curl. In the connector config file, you can reference a value inside the ConfigData returned from ConfigProvider.get(path, keys) using syntax like:
database.password=${mycustom:/path/pass/to/get/method:password}
Here, ConfigData is a map containing {password: 123}.
If you are still seeing a ClassNotFound exception, your ClassPath is probably not set up correctly.
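For reference, a minimal Scala sketch of the provider class from step 1 (the in-memory map is a hypothetical stand-in for your real secret store; package and class names follow the example above):

package com.my.CustomConfigProvider

import java.util
import org.apache.kafka.common.config.ConfigData
import org.apache.kafka.common.config.provider.ConfigProvider

class MyClass extends ConfigProvider {

  // Hypothetical secret store: path -> (key -> value).
  private val secrets: Map[String, Map[String, String]] =
    Map("/path/pass/to/get/method" -> Map("password" -> "123"))

  // Receives the worker's config.providers.<alias>.* settings.
  override def configure(configs: util.Map[String, _]): Unit = ()

  // Return every key available at the given path.
  override def get(path: String): ConfigData = {
    val data = new util.HashMap[String, String]()
    secrets.getOrElse(path, Map.empty).foreach { case (k, v) => data.put(k, v) }
    new ConfigData(data)
  }

  // Return only the requested keys, e.g. "password" in the example above.
  override def get(path: String, keys: util.Set[String]): ConfigData = {
    val all = get(path).data()
    val filtered = new util.HashMap[String, String]()
    keys.forEach(k => if (all.containsKey(k)) filtered.put(k, all.get(k)))
    new ConfigData(filtered)
  }

  override def close(): Unit = ()
}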
Note:
• If you are using AWS ECS/EC2, you need to set the worker config by setting environment variables.
• The worker config file and the connector config file are different things.
With Play Framework 1.2.7 I could set http.port (and likewise jpda.port) in application.conf, like this:
http.port = 9020
jpda.port = 8020
But in Play 2.4 I cannot set http.port in application.conf this way. I know I can set the port when I run the project:
activator "run 9020"
But that is too troublesome for me. If you have any ideas, please share them.
You cannot specify the port in application.conf in run mode (but it can be used when deploying).
In run mode the HTTP server part of Play starts before the application has been compiled. This means that the HTTP server cannot access the application.conf file when it starts. If you want to override HTTP server settings while using the run command you cannot use the application.conf file. Instead, you need to either use system properties or the devSettings setting shown above.
Source: https://www.playframework.com/documentation/2.4.x/Configuration#HTTP-server-settings-in-application.conf
Also look at the full list of server configuration options:
https://www.playframework.com/documentation/2.4.x/ProductionConfiguration#Server-configuration-options
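For example (a sketch of both documented options): in run mode you can pass the port as a system property to the activator/sbt JVM, since the dev server runs inside it:

activator -Dhttp.port=9020 run

Or set it once in build.sbt via devSettings (the play.server.http.port key below follows the Play 2.4 docs linked above; verify the key name for your exact version):

// build.sbt: dev-mode-only HTTP server settings
PlayKeys.devSettings += "play.server.http.port" -> "9020"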
If there are multiple Sling configuration nodes with the same name in CRX, and I invoke configAdmin.getConfiguration in my OSGi service, which config value would it pick? I have multiple config directories under /apps, such as config.qa, config.local, config, etc., which have the same config node. How do I make CQ5 pick config.qa instead of config? I did add the property sling.run.mode=publish,qa in the sling.properties file, but it is still picking up the properties defined under the config folder instead of config.qa. Why isn't it picking the props from the config.qa folder, as described in the documentation at http://docs.adobe.com/docs/en/cq/5-4/deploying/configuring_osgi.html?
It should always pick the folder with the most specific match. For example, if your run modes are author,dev,intranet and you have config folders config.author, config.dev, config.intranet and config.dev.intranet, then config.dev.intranet will be chosen. Make sure you override your common config across these folders to make this work. Please check http://www.wemblog.com/2012/10/how-to-work-with-configurations-in-cq.html for more detail.
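For instance, with run modes author,dev,intranet the resolution looks like this (a sketch with hypothetical folder names; note that in sling.properties the property name is usually sling.run.modes, plural):

# sling.properties
sling.run.modes=author,dev,intranet

# candidate folders under /apps/myapp:
#   config.dev.intranet   <- picked: matches the most active run modes (two)
#   config.author, config.dev, config.intranet   <- single-mode matches
#   config                <- fallback: no run-mode suffix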
Yogesh
Consider following files:
application.conf
app {
port = 5000
}
reference.conf
akka {
cluster {
seed-nodes = ["akka.tcp://sysName#localhost:"${app.port}]
}
}
So when I run ConfigFactory.load() it fails, because ${app.port} is not present in reference.conf.
But the load algorithm is clear: reference.conf is loaded and merged with application.conf. Is there a way to load application.conf and "include" reference.conf into it?
IMPORTANT
I tried adding include "reference.conf" as the first line of application.conf; it does not help.
You can use the file("") syntax to include a file:
include file("reference.conf")
see https://github.com/typesafehub/config/blob/master/HOCON.md#include-syntax
According to Typesafe Config documentation:
The implication of this is that the reference.conf stack has to be
self-contained; you can't leave an undefined value ${foo.bar} to be
provided by application.conf, or refer to ${foo.bar} in a way that you
want to allow application.conf to override. However, application.conf
can refer to a ${foo.bar} in reference.conf.
The documentation also lists some possible workarounds:
• putting an application.conf in a library jar, alongside the reference.conf, with values intended for later resolution.
• putting some logic in code instead of building up values in the config itself.
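For example, one way to apply the second workaround is to bypass ConfigFactory.load() and resolve the merged tree yourself in Scala (a sketch; file names from the question):

import com.typesafe.config.ConfigFactory

// Parse both files without resolving substitutions, merge application.conf
// over reference.conf, then resolve across the merged tree so that
// ${app.port} in reference.conf is filled from application.conf.
val app       = ConfigFactory.parseResources("application.conf")
val reference = ConfigFactory.parseResources("reference.conf")
val config    = app.withFallback(reference).resolve()

val seedNodes = config.getStringList("akka.cluster.seed-nodes")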
These are my configuration files for both development and testing environments. I'm displaying only the db configuration section.
dev.conf
db.default.driver=org.postgresql.Driver
db.default.url="jdbc:postgresql://localhost/mydb"
db.default.user=admin
db.default.password=admin
applyEvolutions.default=true
evolutionplugin=disabled
test.conf
db.default.driver=org.postgresql.Driver
db.default.url="jdbc:postgresql://localhost/mytestdb"
db.default.user=admin
db.default.password=admin
applyEvolutions.default=true
evolutionplugin=enabled
Basically, I'm planning to execute the evolutions DB scripts only against the testing database, so I can clean up the records before triggering the test script.
Based on the documentation, the evolution scripts have to be put in a folder with the same name as the datasource, which is default in this case:
~/conf/evolutions/default/
My question:
Is there a way for me to put the scripts in different location and set the configuration file to refer to that one instead? I'd love to put the test scripts in this path:
~/conf/evolutions/test/
It would be troublesome if someone accidentally enabled evolutions in the dev.conf file: since both configuration files share the same datasource name (default), all the clean-up queries in the default folder would then be executed.
Another workaround I can think of right now is using a different datasource name for each environment, but this would imply a code change, because the application would no longer use the default datasource. I'd like to avoid that.
Maybe you could use the evolutions logic directly from a test fixture of some kind?
play.api.db.evolutions.Evolutions.applyFor(dbName, path) seems like it might do the trick.
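A sketch of what that could look like in a test fixture (applyFor exists in this form in older Play versions, so verify it against your release; the assumption here is that the path argument is an alternate application root, under which conf/evolutions/<dbName>/ is resolved):

import java.io.File
import play.api.db.evolutions.Evolutions

// Hypothetical fixture: apply evolutions for the "default" datasource from
// an alternate root, keeping test scripts out of conf/evolutions/default.
// With this assumption the scripts would live under
// test/resources/conf/evolutions/default/.
def applyTestEvolutions(): Unit =
  Evolutions.applyFor("default", new File("test/resources"))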