Configure logging programmatically in Scala/Play

The Play framework requires (by default) that you configure logging through a logback.xml file. I'd like to build my log appenders through code so I can fetch parameters at runtime (e.g. the graylog destination for the logs is fetched from the deployment environment, rather than baking it in statically through an XML file).
This sort of thing is fairly easy to achieve in Java (by overriding logging factories and the like), so I wondered whether the same is possible in Play.

Yes, you can configure logback programmatically, see: https://akhikhl.wordpress.com/2013/07/11/programmatic-configuration-of-slf4jlogback/
But I wouldn't recommend it. For starters it's a verbose API that isn't pleasant to work with. Beyond that, it's generally nice for configuration to be declarative (even if it is in XML in this case).
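To give a flavour of it, here is a minimal sketch of wiring an appender up in code (a console appender for illustration; a GELF appender would be attached the same way):
import ch.qos.logback.classic.{Level, LoggerContext}
import ch.qos.logback.classic.encoder.PatternLayoutEncoder
import ch.qos.logback.classic.spi.ILoggingEvent
import ch.qos.logback.core.ConsoleAppender
import org.slf4j.{Logger, LoggerFactory}

object ProgrammaticLogbackConfig {
  def configure(): Unit = {
    // When logback is on the classpath, the SLF4J logger factory is the logback LoggerContext
    val context = LoggerFactory.getILoggerFactory.asInstanceOf[LoggerContext]
    context.reset() // discard whatever logback.xml configured

    val encoder = new PatternLayoutEncoder()
    encoder.setContext(context)
    encoder.setPattern("%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n")
    encoder.start()

    val appender = new ConsoleAppender[ILoggingEvent]()
    appender.setContext(context)
    appender.setEncoder(encoder)
    appender.start()

    val root = context.getLogger(Logger.ROOT_LOGGER_NAME)
    root.setLevel(Level.INFO)
    root.addAppender(appender)
  }
}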
For your use case, Logback's XML config does support variables which can come from system properties or environment variables: https://logback.qos.ch/manual/configuration.html#definingProps
However, you probably want a different config across environments (no Graylog locally). I think many projects handle that by specifying the logback XML location as a system property at startup: https://logback.qos.ch/manual/configuration.html#configFileProperty
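For example, each environment can point the JVM at its own file at startup (the path here is just illustrative):
java -Dlogback.configurationFile=/etc/myapp/logback-prod.xml -jar your-app.jar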
Alternatively, I suspect Graylog has some way of watching a file to pick up your logs. That's what we do for picking up logs in Splunk on my team. We don't want to make a change to our code when someone reconfigures Splunk/Graylog.

The solution I used in the end was a logback context listener that populates the context with the parameters pulled from the environment. The listener can be added to the logback.xml as follows:
<contextListener class="LoggerStartup"/>
The LoggerStartup can then populate the context, which I achieved through AWS SSM (see the simplified code below).
import ch.qos.logback.classic.spi.LoggerContextListener
import ch.qos.logback.core.spi.{ContextAwareBase, LifeCycle}

class LoggerStartup extends ContextAwareBase with LoggerContextListener with LifeCycle {
  override def start(): Unit = {
    val context = getContext()
    val graylogUrl = ... // Go get value from remote store (AWS SSM in our case)
    context.putProperty("GRAYLOG_URL", graylogUrl)
  }
  // remaining LoggerContextListener/LifeCycle members omitted from this simplified example
}
And then referenced this context variable in the logback file:
<appender name="GELF UDP APPENDER" class="me.moocar.logbackgelf.GelfUDPAppender">
  <remoteHost>${GRAYLOG_URL}</remoteHost>
  ...
</appender>

Related

Using environment variables to configure Docker deployment of Lagom Scala application

We're developing several Lagom-based Scala micro-services. They are configured using variable replacement in application.conf, eg.
mysql = {
  url = "jdbc:mysql://"${?ENV_MYSQL_DATABASE_URL}
}
During development, we set these variables as Java system properties via an env.sbt file that calls System.setProperty("ENV_MYSQL_DATABASE_URL", url). This is working fine.
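Roughly along these lines (the URL value here is made up):
// env.sbt (development only) – evaluated when sbt loads the build
val devMysqlUrl = System.setProperty("ENV_MYSQL_DATABASE_URL", "jdbc:mysql://localhost:3306/dev")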
Now I want to deploy this in a container to my local Docker installation. We are using the SbtReactiveAppPlugin to build the Docker image from build.sbt and simply run sbt Docker/publishLocal. This works as expected, a Docker image is created and I can fire it up.
However, passing in environment variables using the standard docker or docker-compose mechanisms does not seem to work. While I can see that the environment variables are set correctly inside the Docker container (verified using env in a bash session, and also by doing log.debug("ENV_MYSQL_DATABASE_URL via env: " + sys.env("ENV_MYSQL_DATABASE_URL")) inside the service), they are not used by application.conf and are not available in the configuration system. The values are empty/unset (verified through configuration.getString("ENV_MYSQL_DATABASE_URL").toString() and the exceptions thrown by the mysql system and other systems).
The only way I've gotten it to work was by fudging this into the JAVA_OPTS via JAVA_OPTS=-D ENV_MYSQL_DATABASE_URL=..... However, this seems like a hack, and doesn't appear to scale very well with dozens of environment parameters.
Am I missing something, is there a way to easily use the environment variables inside the Lagom application and application.conf?
Thanks!
I've used Lightbend config to configure Lagom services via environment variables in docker containers for many years, so I know that it can be done and has been pretty straightforward in my experience.
With that in mind, when you say that they're not used by application.conf, do you mean that they're unset? Note that unless you're passing a very specific option as a Java property, configuration.getString("ENV_MYSQL_DATABASE_URL") will not read from an environment variable, so checking that will not tell you anything about whether mysql.url is affected by the environment variable. configuration.getString("mysql.url") will give you a better idea of what's going on.
I suspect that in fact your Docker image is being built with the dev-mode properties hardcoded in, and since Java system properties take precedence over everything else, they're shadowing the environment variable.
You may find it useful to structure your application.conf along these lines:
mysql_database_url = "..." # Some reasonable default for dev-mode
mysql_database_url = ${?ENV_MYSQL_DATABASE_URL}
mysql {
  url = "jdbc://"${mysql_database_url}
}
In this case, you have a reasonable default for a developer (probably including in the docs some instructions for running MySQL in a way compatible with that configuration). The default can then be overridden by setting a Java system property (e.g. JAVA_OPTS=-Dmysql_database_url=...) or by setting the ENV_MYSQL_DATABASE_URL environment variable.
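A quick way to see which value actually wins is to read the resolved path rather than the raw environment variable (a small sketch, assuming the application.conf layout above):
import com.typesafe.config.ConfigFactory

object ConfigCheck extends App {
  // ConfigFactory.load() resolves application.conf, substituting
  // ${?ENV_MYSQL_DATABASE_URL} from the environment when it is set,
  // with -Dmysql_database_url=... system properties taking precedence over the defaults.
  val config = ConfigFactory.load()
  println(config.getString("mysql.url"))
}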
While I agree with the answer provided by Levi Ramsey, I would suggest using Typesafe Config to load your config.

Creating and using a custom kafka connect configuration provider

I have installed and tested Kafka Connect in distributed mode; it works now, connecting to the configured sink and reading from the configured source.
That being the case, I moved on to enhance my installation. The one area I think needs immediate attention is the fact that the only available means of creating a connector is through REST calls, which means I need to send my information over the wire, unprotected.
In order to secure this, Kafka introduced the new ConfigProvider seen here.
This is helpful as it allows you to set properties on the server and then reference them in the REST call, like so:
{
  ...
  "property": "${file:/path/to/file:nameOfThePropertyInFile}"
  ...
}
This works really well, just by adding the property file on the server and adding the following config in the distributed.properties file:
config.providers=file # multiple comma-separated provider types can be specified here
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
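The referenced file is just a plain Java properties file sitting on the worker host, e.g. something like:
# contents of /path/to/file
nameOfThePropertyInFile=some-secret-value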
While this solution works, it does not really ease my concerns regarding security: the information has gone from being sent over the wire to sitting in a file, in plain sight for everyone to see.
The Kafka team foresaw this issue and allowed clients to provide their own configuration providers by implementing the ConfigProvider interface.
I have created my own implementation and packaged it in a jar, giving it the suggested final name:
META-INF/services/org.apache.kafka.common.config.ConfigProvider
and added the following entry in the distributed file:
config.providers=cust
config.providers.cust.class=com.somename.configproviders.CustConfigProvider
However I am getting an error from connect, stating that a class implementing ConfigProvider, with the name:
com.somename.configproviders.CustConfigProvider
could not be found.
I am at a loss now, because the documentation on their site does not explain very well how to configure custom config providers.
Has someone worked on a similar issue and could provide some insight into this? Any help would be appreciated.
I just went through this recently to set up a custom ConfigProvider. The official docs are ambiguous and confusing.
I have created my own implementation and packaged it in a jar, giving it the suggested final name:
META-INF/services/org.apache.kafka.common.config.ConfigProvider
You can name the jar whatever you like, but it needs to be packaged in jar format, with a .jar suffix.
Here is the complete step-by-step. Suppose your custom ConfigProvider's fully-qualified name is com.my.CustomConfigProvider.MyClass.
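For reference, a rough sketch of what that class might look like (Scala shown for illustration; the lookup against your secure store is hypothetical):
package com.my.CustomConfigProvider

import java.util.{HashMap => JHashMap, Map => JMap, Set => JSet}
import org.apache.kafka.common.config.ConfigData
import org.apache.kafka.common.config.provider.ConfigProvider

class MyClass extends ConfigProvider {

  override def configure(configs: JMap[String, _]): Unit = {
    // read any config.providers.<alias>.param.* settings here, if you need them
  }

  override def get(path: String): ConfigData =
    new ConfigData(new JHashMap[String, String]()) // nothing to return without specific keys in this sketch

  override def get(path: String, keys: JSet[String]): ConfigData = {
    val data = new JHashMap[String, String]()
    val it = keys.iterator()
    while (it.hasNext) {
      val key = it.next()
      data.put(key, lookup(path, key)) // hypothetical lookup against your secure store
    }
    new ConfigData(data)
  }

  override def close(): Unit = ()

  // Hypothetical: fetch a single value from wherever you keep your secrets
  private def lookup(path: String, key: String): String = ???
}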
1. Create a file at META-INF/services/org.apache.kafka.common.config.ConfigProvider. The file content is the fully-qualified class name:
com.my.CustomConfigProvider.MyClass
2. Include your source code and the above META-INF folder when generating the jar package. If you are using Maven, put the META-INF/services directory under src/main/resources so it is included in the jar.
3. Put your final jar file, say custom-config-provider-1.0.jar, under the Kafka worker plugin folder. The default is /usr/share/java; this is the PLUGIN_PATH (plugin.path) in the Kafka worker config file.
4. Upload all the dependency jars to the PLUGIN_PATH as well. Alternatively, use the META-INF/MANIFEST.MF file inside your jar to set the Class-Path of the dependent jars that your code will use.
5. In the Kafka worker config file, create two additional properties (shown here in environment-variable form, see the note below; in a plain worker properties file the equivalent keys are config.providers and config.providers.mycustom.class):
CONNECT_CONFIG_PROVIDERS: 'mycustom', // alias name of your ConfigProvider
CONNECT_CONFIG_PROVIDERS_MYCUSTOM_CLASS: 'com.my.CustomConfigProvider.MyClass',
6. Restart the workers.
7. Update your connector config file by POSTing it to the Kafka Connect REST API with curl. In the connector config file, you can reference a value inside the ConfigData returned from ConfigProvider.get(path, keys) by using syntax like:
database.password=${mycustom:/path/pass/to/get/method:password}
Here ConfigData holds a map which contains {password: 123}.
If you are still seeing a ClassNotFound exception, your plugin path or Class-Path is probably not set up correctly.
Note:
• If you are using AWS ECS/EC2, you need to set the worker config by setting environment variables.
• The worker config file and the connector config file are two different files.

How to include a deployment.properties file of environment variables in WSO2 Identity Server?

I want to include a properties file of environment variables to better integrate between environments in AWS deployment of WSO2 Identity Server. I could put all the environment variables in line in the wso2server.sh, but it would be better to inject a properties file that has all the variables I need.
I am trying to include:
-Ddeployment.conf="$CARBON_HOME/repository/conf/etc/dev-env.properties" \
in the wso2server.sh, where my dev-env.properties has variables that I want to include in the XML configurations. An example is the user-mgt.xml connection string:
<Property name="ConnectionURL">${user.mgt.connection.url}</Property>
I could do -Duser.mgt.connection.url="connection-string" \ but I have about 20 properties that I currently want to set this way and would prefer to keep them all in one file instead of as individual inline properties. I found a Medium article describing something like what I am looking for, but I'm not sure it's exactly what I want and it was unclear how to implement it.
Do I need to write a Java Util class to read these environment variables from the properties file or is there a simpler way to do this? And if I need a utils class what would that look like?
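To be clear, the kind of util class I'm imagining would do something roughly like this (a hypothetical sketch, written in Scala here just to show the shape; none of the names are real WSO2 APIs):
import java.io.FileInputStream
import java.util.Properties

// Hypothetical helper: load the file passed via -Ddeployment.conf=... and expose each
// entry as a system property so ${...} placeholders in the XML configs can resolve.
object EnvPropertiesLoader {
  def load(): Unit = {
    val path = System.getProperty("deployment.conf")
    if (path != null) {
      val props = new Properties()
      val in = new FileInputStream(path)
      try props.load(in) finally in.close()

      val names = props.stringPropertyNames().iterator()
      while (names.hasNext) {
        val name = names.next()
        // don't overwrite values already provided individually with -D
        if (System.getProperty(name) == null) {
          System.setProperty(name, props.getProperty(name))
        }
      }
    }
  }
}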
As far as I know, this feature will be supported by WSO2 products that are based on Carbon Kernel 5 and onwards. But at the moment most WSO2 products are based on kernel version 4.0, so I think you can't get this done with the existing WSO2 products.

Spring cloud config properties not honouring config properties

I wish to use consul strictly as a config source.
I am using spring-cloud-consul-config to get my config.
I am using git2consul to load files into consul and read them.
As per the spring cloud documentation I have added the following to my build.gradle
compile ("org.springframework.cloud:spring-cloud-starter-consul-config")
and have the following in my application.properties
spring.application.name=test-service
spring.cloud.consul.config.enabled=true
spring.cloud.consul.enabled=true
spring.cloud.consul.config.format=FILES
The problem I am facing is that the expected properties are not being loaded into the ConfigurationProperties beans. On further debugging, in the ConsulPropertySourceLocator::locate(Environment environment) method, I see that the this.properties object is still loaded with the KEY_VALUE enum rather than FILES.
This led me to the ConsulConfigBootstrapConfiguration class, where the ConsulConfigProperties bean is instantiated using a constructor.
Is this the problem, or do I have something wrong in my setup?
If someone has a working setup of git2consul and spring cloud config, please can you point me to it for reference.
These values that you have in application.properties
spring.application.name=test-service
spring.cloud.consul.config.enabled=true
spring.cloud.consul.enabled=true
spring.cloud.consul.config.format=FILES
need to be in bootstrap.properties.
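In other words, put them in src/main/resources/bootstrap.properties, which is read by the bootstrap context before application.properties is processed (that is where spring-cloud-consul-config looks for its settings):
spring.application.name=test-service
spring.cloud.consul.config.enabled=true
spring.cloud.consul.enabled=true
spring.cloud.consul.config.format=FILES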

Ignore a log4net error in PowerShell

I have an issue with a script. I don't use log4net at all and I'm not planning to, but some resource I access during my script apparently has references to log4net, so I get these messages:
log4net:ERROR XmlConfigurator: Failed to find configuration section 'log4net' in the application's .config file. Check your .config file for the <log4net> and <configSections> elements. The <log4net> configuration section should look like: [...]
I don't really care about this, as it's not a real error; I would prefer to somehow hide these messages from the prompt window. Is this possible?
How can I ignore this information without too much hassle?
This message comes from log4net's internal debugging and means that no log4net configuration information was found in the config file. What I find strange is that this kind of output is usually opt-in:
There are 2 different ways to enable internal debugging in log4net.
These are listed below. The preferred method is to specify the
log4net.Internal.Debug option in the application's config file.
Internal debugging can also be enabled by setting a value in the application's configuration file (not the log4net configuration file, unless the log4net config data is embedded in the application's config file). The log4net.Internal.Debug application setting must be set to the value true.
This setting is read immediately on startup and will cause all internal debugging messages to be emitted.
To enable log4net's internal debug programmatically you need to set the log4net.Util.LogLog.InternalDebugging property to true.
Obviously the sooner this is set the more debug will be produced.
So either the code of one component uses the code approach, or there is a configuration value set to true. Your options are:
• Look through the configuration files for a reference to the log4net.Internal.Debug config key; if you find one set to true, set it to false.
• Add an empty log4net section to the configuration file to satisfy the configurator and prevent it from complaining (see the snippet after this list).
• If the internal debugging is enabled through code, you may be able to redirect console out and the trace appenders (see the link for where the internal debugging writes to), but this really depends on your environment, so you'll need to dig a bit more to find how to catch all outputs. Not really simple.
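For the second option, the snippet in the application's .config file would look something like this (the section handler type is the one documented by log4net):
<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
  </configSections>
  <log4net />
</configuration>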