Two configuration files in Scala-Spray framework

I have a REST API developed using Scala and the Spray framework. I can launch the API from localhost. The API is connected to a database; the IP address (localhost) and port of the database are read from the "application.conf" file under resources.
Everything works fine until I start using Docker. In Docker I have:
1. One Docker container for the REST API
2. One Docker container for the database
The database's IP address changes for each Docker instance, so I need to update my "application.conf" file. (I could instead use the DB instance's hostname, which stays the same.)
My question is: can I have two "application.conf" files, one for localhost and one for the Docker instance? Is there a way to change the "application.conf" file at runtime?
P.S. I am using "sbt run" to run the application, and as per the documentation it does not support Java system properties or environment variables.

Yes, you can choose the config at runtime. Spray and Akka use the Typesafe Config library, which allows overriding single settings or the whole configuration using JVM system properties.
From the documentation of the Typesafe Config library:
For applications using application.{conf,json,properties}, system properties can be used to force a different config source:
• config.resource specifies a resource name - not a basename, i.e. application.conf not application
• config.file specifies a filesystem path; again it should include the extension, not be a basename
• config.url specifies a URL
These system properties specify a replacement for application.{conf,json,properties}, not an addition. They only affect apps using the default ConfigFactory.load() configuration. In the replacement config file, you can use include "application" to include the original default config file; after the include statement you could go on to override certain settings.
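For example, a minimal sketch of a Docker-specific override file; the db.host and db.port keys are hypothetical, so use whatever keys your application.conf actually defines:
# docker.conf - placed on the classpath alongside application.conf
include "application"
# override only what differs in Docker; "database" would be the DB
# container's hostname on the Docker network
db.host = "database"
db.port = 5432
You can then select it at startup with -Dconfig.resource=docker.conf. With sbt, passing the flag to sbt itself (sbt -Dconfig.resource=docker.conf run) usually works, because run executes inside the sbt JVM unless forking is enabled.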

Related

Using environment variables to configure Docker deployment of Lagom Scala application

We're developing several Lagom-based Scala microservices. They are configured using variable replacement in application.conf, e.g.
mysql = {
  url = "jdbc:mysql://"${?ENV_MYSQL_DATABASE_URL}
}
During development, we set these variables as Java system properties via an env.sbt file that calls System.setProperty("ENV_MYSQL_DATABASE_URL", url). This is working fine.
Now I want to deploy this in a container to my local Docker installation. We are using the SbtReactiveAppPlugin to build the Docker image from build.sbt and simply run sbt Docker/publishLocal. This works as expected: a Docker image is created and I can fire it up.
However, passing in environment variables using the standard docker or docker-compose mechanisms does not seem to work. While I can see that the environment variables are set correctly inside the Docker container (verified by running env in a bash shell and also by doing log.debug("ENV_MYSQL_DATABASE_URL via env: " + sys.env("ENV_MYSQL_DATABASE_URL")) inside the service), they are not used by application.conf and are not available in the configuration system. The values are empty/unset (verified through configuration.getString("ENV_MYSQL_DATABASE_URL").toString() and the exceptions thrown by the mysql system and other systems).
The only way I've gotten it to work was by fudging this into JAVA_OPTS via JAVA_OPTS=-DENV_MYSQL_DATABASE_URL=.... However, this seems like a hack, and doesn't appear to scale very well to dozens of environment parameters.
Am I missing something, is there a way to easily use the environment variables inside the Lagom application and application.conf?
Thanks!
I've used Lightbend Config to configure Lagom services via environment variables in Docker containers for many years, so I know it can be done, and it has been pretty straightforward in my experience.
With that in mind, when you say that they're not used by application.conf, do you mean that they're unset? Note that unless you're passing a very specific option as a Java property, configuration.getString("ENV_MYSQL_DATABASE_URL") will not read from an environment variable, so checking that will not tell you anything about whether mysql.url is affected by the environment variable. configuration.getString("mysql.url") will give you a better idea of what's going on.
I suspect that in fact your Docker image is being built with the dev-mode properties hardcoded in, and since Java system properties take precedence over everything else, they're shadowing the environment variable.
You may find it useful to structure your application.conf along these lines:
mysql_database_url = "..." # Some reasonable default for dev mode
mysql_database_url = ${?ENV_MYSQL_DATABASE_URL}
mysql {
  url = "jdbc://"${mysql_database_url}
}
In this case, you have a reasonable default for a developer (probably including in the docs some instructions for running MySQL in a way compatible with that configuration). The default can then be overridden by setting a Java property (e.g. JAVA_OPTS=-Dmysql_database_url=...) or by setting the ENV_MYSQL_DATABASE_URL environment variable.
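With that structure, something like docker run -e ENV_MYSQL_DATABASE_URL=db:3306/mydb myimage (image name and value illustrative) overrides the default without touching Java properties at all.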
While I agree with the answer provided by Levi Ramsey, I would suggest using Typesafe Config to load your config.
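A minimal sketch of loading it that way (the mysql.url key follows the example above; the object name is illustrative):
import com.typesafe.config.{Config, ConfigFactory}

object DbConfig {
  // ConfigFactory.load() reads application.conf (or whatever -Dconfig.resource /
  // -Dconfig.file points at) and resolves ${?ENV_...} substitutions at load time
  private val config: Config = ConfigFactory.load()

  val mysqlUrl: String = config.getString("mysql.url")
}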

Creating and using a custom Kafka Connect configuration provider

I have installed and tested Kafka Connect in distributed mode; it works now, connecting to the configured sink and reading from the configured source.
That being the case, I moved on to enhancing my installation. The one area I think needs immediate attention is the fact that the only available means of creating a connector is through REST calls, which means I need to send my information over the wire, unprotected.
In order to secure this, Kafka introduced the new ConfigProvider seen here.
This is helpful as it allows setting properties on the server and then referencing them in the REST call, like so:
{
  ...
  "property": "${file:/path/to/file:nameOfThePropertyInFile}"
  ...
}
This works really well: just add the property file on the server and the following config to the distributed.properties file:
config.providers=file # multiple comma-separated provider types can be specified here
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
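The referenced file itself is a plain Java properties file; a hypothetical example matching the placeholder above:
# /path/to/file - read by FileConfigProvider; keys are looked up by name
nameOfThePropertyInFile=someSecretValue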
While this solution works, it really does not ease my concerns regarding security: the information has gone from being sent over the wire to sitting in a repository, in plain text for everyone to see.
The kafka team foresaw this issue and allowed clients to produce their own configuration providers implementing the interface ConfigProvider.
I have created my own implementation and packaged it in a jar, giving it the suggested final name:
META-INF/services/org.apache.kafka.common.config.ConfigProvider
and added the following entries to the distributed properties file:
config.providers=cust
config.providers.cust.class=com.somename.configproviders.CustConfigProvider
However I am getting an error from Connect, stating that a class implementing ConfigProvider with the name
com.somename.configproviders.CustConfigProvider
could not be found.
I am at a loss now, because the documentation on their site is not very explicit about how to configure custom config providers.
Has someone worked on a similar issue and could provide some insight into this? Any help would be appreciated.
I just went through these steps to set up a custom ConfigProvider recently. The official doc is ambiguous and confusing.
I have created my own implementation and packaged it in a jar, giving it the suggested final name:
META-INF/services/org.apache.kafka.common.config.ConfigProvider
You can name the jar whatever you like, but it needs to be packed into jar format, with a .jar suffix; the name above is the service-loader file inside the jar, not the jar's name.
Here is the complete step-by-step. Suppose your custom ConfigProvider's fully-qualified name is com.my.CustomConfigProvider.MyClass.
1. Create a file under the directory META-INF/services/ named org.apache.kafka.common.config.ConfigProvider. The file content is the fully-qualified class name:
com.my.CustomConfigProvider.MyClass
2. Include your source code and the above META-INF folder when generating the jar package. If you are using Maven, put the META-INF folder under src/main/resources so it ends up inside the jar.
3. Put your final jar file, say custom-config-provider-1.0.jar, under the Kafka worker plugin folder. The default is /usr/share/java; see PLUGIN_PATH in the Kafka worker config file.
4. Upload all the dependency jars to PLUGIN_PATH as well. Use the META-INF/MANIFEST.MF file inside your jar to configure the Class-Path for the dependent jars that your code will use.
5. In the Kafka worker config file, create two additional properties:
CONNECT_CONFIG_PROVIDERS: 'mycustom', // Alias name of your ConfigProvider
CONNECT_CONFIG_PROVIDERS_MYCUSTOM_CLASS: 'com.my.CustomConfigProvider.MyClass',
6. Restart the workers.
7. Update your connector config file by sending a POST to the Kafka REST API (e.g. with curl). In the connector config file, you can reference a value inside the ConfigData returned from ConfigProvider.get(path, keys) using syntax like:
database.password=${mycustom:/path/pass/to/get/method:password}
Here, ConfigData is a map containing {password: 123}.
If you are still seeing a ClassNotFoundException, your classpath is probably not set up correctly.
Note:
• If you are using AWS ECS/EC2, you set the worker config via environment variables (hence the CONNECT_* names above).
• The worker config and the connector config file are different.
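For reference, a minimal sketch of what such a provider can look like, written in Scala (the package and class names follow the question above; the backing lookup is entirely hypothetical):
package com.somename.configproviders

import java.util
import org.apache.kafka.common.config.ConfigData
import org.apache.kafka.common.config.provider.ConfigProvider
import scala.jdk.CollectionConverters._

class CustConfigProvider extends ConfigProvider {

  // receives any config.providers.cust.param.* settings from the worker config
  override def configure(configs: util.Map[String, _]): Unit = ()

  // resolve every key available at `path`
  override def get(path: String): ConfigData =
    new ConfigData(Map.empty[String, String].asJava)

  // resolve only the requested keys, as referenced by ${cust:path:key}
  override def get(path: String, keys: util.Set[String]): ConfigData = {
    val resolved = keys.asScala.map(key => key -> lookup(path, key)).toMap
    new ConfigData(resolved.asJava)
  }

  override def close(): Unit = ()

  // backend-specific secret lookup -- replace with your secure store
  private def lookup(path: String, key: String): String =
    sys.error(s"look up $key at $path in your secure store")
}
This class name is exactly what goes into config.providers.cust.class and into the service-loader file from step 1.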

sling run modes use?

What is the use of the sling run modes property in the sling.properties file?
I have an OSGi Felix bundle that is installed into the AEM admin bundle console through the AEM CQ5 package manager.
The configuration properties of one of the bundle's services are not available unless I put the following line in the cq5/config/sling.properties file:
sling.run.modes=author,sandbox
Why is this so? What is the importance of sling.run.modes?
Thank you,
Sri
Run modes allow you to tune your AEM instance for a specific purpose; for example author or publish, test, development, intranet, or others.
Example for dev: sling.run.modes=author,dev
One use of run modes: say I have config.author.prod and config.author.dev folders in CRXDE. Based on the instance's run mode, the OSGi bundle will pick up the corresponding configuration settings defined in the config nodes under config.author.dev or config.author.prod and start working.
Ref: https://docs.adobe.com/docs/en/cq/5-6-1/deploying/configure_runmodes.html
Ref: https://helpx.adobe.com/experience-manager/kb/RunModeDependentConfigAndInstall.html
Define a repository-based configuration for a single instance
There are two ways to configure CQ5.
Configure the Apache Felix Web Management Console
The configuration in the Apache Felix Web Management Console (http://<host>:<port>/system/console/configMgr) is always specific to the current instance.
You can find a description in the documentation: http://dev.day.com/content/docs/v5_2/html-resources/cq5_guide_system_administrator/ch05s03.html
Repository-based configuration
It is also possible to store configuration in the CRX repository as nodes of nodetype sling:OsgiConfig.
For more information, see http://dev.day.com/content/docs/v5_2/html-resources/cq5_guide_system_administrator/ch05s02.html
With this method, it is possible to share configuration among several instances.
The name of these nodes must be equal to the persistent identity (PID) of the configuration (for example, the name of the service). If you look at http://<host>:<port>/system/console/config, you see these names listed as service.pid properties. These configuration nodes have to be child nodes of a node of type nt:folder whose name starts with config, followed by the run modes the configuration applies to, each separated by a dot.
Examples: config.author, config.publish, config.author.dev, config.author.foo.dev, and so on.
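Putting this together, a hypothetical repository layout for an author instance running in dev mode (paths and property names illustrative):
/apps/myproject/config.author.dev/ [nt:folder]
    com.myproject.core.MyService [sling:OsgiConfig]
        myservice.endpoint = "https://dev.example.com"
An instance started with sling.run.modes=author,dev picks up this node; a publish or prod instance ignores it.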

How can I set http.port in application.conf using Play Framework 2.4 (Scala)?

With Play Framework 1.2.7 I could set http.port (and likewise jpda.port) in application.conf, like this:
http.port = 9020
jpda.port = 8020
But in Play 2.4 I cannot set http.port in application.conf like this. I know that I can do it when I run the project with
activator "run 9020"
but that is too troublesome for me. If you have any ideas, please share them.
You cannot specify the port in application.conf in run (dev) mode (though it can be used when deploying).
In run mode the HTTP server part of Play starts before the application has been compiled. This means that the HTTP server cannot access the application.conf file when it starts. If you want to override HTTP server settings while using the run command you cannot use the application.conf file. Instead, you need to either use system properties or the devSettings setting shown above.
Source: https://www.playframework.com/documentation/2.4.x/Configuration#HTTP-server-settings-in-application.conf
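For the run command, the devSettings override goes in build.sbt; a sketch, assuming Play 2.4's play.server.http.port dev-settings key:
PlayKeys.devSettings += "play.server.http.port" -> "9020"
In production the packaged start script takes a system property instead, e.g. -Dhttp.port=9020.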
Also look at the full server configuration options:
https://www.playframework.com/documentation/2.4.x/ProductionConfiguration#Server-configuration-options

CherryPy: Accessing Global config

I'm working on a CherryPy application based on what I found in that BitBucket repository.
As in this example, there are two config files, server.cfg (aka "global") and app.cfg.
Both config files are loaded in the serve.py file:
# Update the global settings for the HTTP server and engine
cherrypy.config.update(os.path.join(self.conf_path, "server.cfg"))
# ...
# Our application
from webapp.app import Twiseless
webapp = Twiseless()
# Let's mount the application so that CherryPy can serve it
app = cherrypy.tree.mount(webapp, '/', os.path.join(self.conf_path, "app.cfg"))
Now I'd like to add the database configuration.
My first thought was to add it to server.cfg (is this the best place, or should it be located in app.cfg?).
But if I add the database configuration to server.cfg, I don't know how to access it.
Using:
cherrypy.request.app.config['Database']
works only if the [Database] section is in app.cfg.
I tried to print cherrypy.request.app.config, and it shows only the values defined in app.cfg, nothing from server.cfg.
So I have two related questions:
1. Is it best to put the database connection in the server.cfg or the app.cfg file?
2. How do I access the server.cfg (global) configuration in my code?
Thanks for your help! :)
Put it in the app config. A good question to help you decide where to put such things is: "If I mounted an unrelated blog app at /blogs on the same server, would I want it to share that config?" If so, put it in the server config. If not, put it in the app config.
Note also that the global config isn't sectioned, so you can't put a [Database] section in there anyway; only the app config allows sections. If you wanted to keep database settings in the global config, you'd have to use entry names like "database_port" instead. You would then access them directly by name: cherrypy.config.get("database_port").
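For example, a hypothetical [Database] section in app.cfg (CherryPy parses config values as Python literals, hence the quoted string):
[Database]
host = "localhost"
port = 3306
Handlers can then read it as cherrypy.request.app.config['Database']['host'].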