I am using ASP.NET Core 5.0 and have a Web API app deployed to an internal cloud, where a few settings such as the DB connection are controlled via environment variables on the host. In my Startup.cs I have the code below:
string projectDbConnection = Configuration.GetSection("ProjectDatabaseSettings").GetValue<string>("PROJECT_DB_CONNECTION");
string projectDbName = Configuration.GetSection("ProjectDatabaseSettings").GetValue<string>("PROJECT_DB_NAME");
As I understand it, when running locally in IIS Express the app reads appsettings.<Environment>.json, and those values take precedence over the values in appsettings.json.
But the app always connects to the wrong DB when deployed to the cloud, where I set PROJECT_DB_CONNECTION and PROJECT_DB_NAME as environment variables for the app.
To make the app read from the environment variables I had to change the above code in Startup.cs to:
string projectDbConnection = Configuration.GetValue<string>("PROJECT_DB_CONNECTION");
string projectDbName = Configuration.GetValue<string>("PROJECT_DB_NAME");
I am unable to understand the difference between GetSection(...).GetValue() and plain GetValue(), and why I should use Configuration.GetValue() to make the app read from environment variables.
What am I missing, and when should we use which?
Naming of environment variables
There is a naming convention for mapping nested appsettings keys to environment variables; see naming of environment variables.
Each element in the hierarchy is separated by a double underscore.
In your case it would work if you name the environment variable ProjectDatabaseSettings__PROJECT_DB_CONNECTION.
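For example, to override the keys from the question via environment variables, the names would be flattened with the double underscore (a minimal sketch; the values are placeholders):
ProjectDatabaseSettings__PROJECT_DB_CONNECTION=<connection string>
ProjectDatabaseSettings__PROJECT_DB_NAME=<database name>
With these set, the original Configuration.GetSection("ProjectDatabaseSettings").GetValue<string>(...) calls resolve the keys ProjectDatabaseSettings:PROJECT_DB_CONNECTION and ProjectDatabaseSettings:PROJECT_DB_NAME, so no code change is needed.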
Config Order
According to the Microsoft documentation there is an order in which the config sources are applied, with later sources overriding earlier ones:
ChainedConfigurationProvider: adds an existing IConfiguration as a source. In the default configuration case, this adds the host configuration and sets it as the first source for the app configuration.
appsettings.json using the JSON configuration provider.
appsettings.Environment.json using the JSON configuration provider. For example, appsettings.Production.json and appsettings.Development.json.
App secrets when the app runs in the Development environment.
Environment variables using the Environment Variables configuration provider.
Command-line arguments using the Command-line configuration provider.
The use case
This is useful when you develop locally using appsettings.json but run in a cluster or cloud in production, where it is more convenient to use environment variables (for example, in Kubernetes environment variables are set via ConfigMaps).
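As an illustration, the same variables could be supplied to a container like this, e.g. in a docker-compose file (the service and image names are assumptions for the example), and they would then override the values from appsettings.json at runtime:
services:
  project-api:
    image: project-api:latest
    environment:
      ProjectDatabaseSettings__PROJECT_DB_CONNECTION: "<connection string>"
      ProjectDatabaseSettings__PROJECT_DB_NAME: "<database name>"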
Related
I've got release pipelines defined that have worked. I've got a config transform that writes an API URL to a config file (currently with a hardcoded API URL).
What I'd like is for the config to be rewritten based on the agent it's being deployed on.
E.g. if the machine being deployed to is TEST-1, I'd like the transform step to write https://TEST-1.somedomain.com/api into the config.
The .somedomain.com/api can be static.
I've tried modifying the pipeline variable's value to be https://${{Environment.Name}}.somedomain.com/api, but it just replaces the API_URL in the config with that literal string (it does not substitute the machine name into the variable).
Since variables are the source of the values written to configs during the transform, I'm struggling to see another way to do this.
Some gotchas:
We use non-YAML pipeline definitions (I know I've seen people put logic in variable definitions within YAML pipelines).
We can't just use localhost, as the configuration is read into a JavaScript-rich app, which would leave the JS trying to connect to localhost instead of to the server.
I'm interested in any way I could solve this problem.
${{Environment.Name}} is not valid syntax for either YAML or classic pipelines.
In classic pipelines it would be $(Environment.Name).
In YAML, $(Environment.Name) or ${{ variables['Environment.Name'] }} would work.
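If YAML were an option, a minimal sketch of the runtime resolution in a deployment job targeting an environment named TEST-1 (job and step names are made up for the example):
jobs:
- deployment: DeployApi
  environment: TEST-1
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo "API_URL=https://$(Environment.Name).somedomain.com/api"
          displayName: Show the resolved API URL
Here $(Environment.Name) is resolved at runtime, unlike the compile-time ${{ }} template syntax.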
Codewind has the notion of a Linked Project, which lets a person define an environment variable that will hold the host name of the deployed Linked Project.
However, how does a person define and set other environment variables that will be visible in the local developer Docker runtime deployment? For example, environment variables to hold the user name or password for an external resource like a database.
We're developing several Lagom-based Scala microservices. They are configured using variable replacement in application.conf, e.g.:
mysql = {
  url = "jdbc:mysql://"${?ENV_MYSQL_DATABASE_URL}
}
During development, we set these variables as Java system properties via an env.sbt file that calls System.setProperty("ENV_MYSQL_DATABASE_URL", url). This is working fine.
Now I want to deploy this in a container to my local Docker installation. We are using the SbtReactiveAppPlugin to build the Docker image from build.sbt and simply run sbt Docker/publishLocal. This works as expected: a Docker image is created and I can fire it up.
However, passing in environment variables using the standard docker or docker-compose mechanisms does not seem to work. While I can see that the environment variables are set correctly inside the Docker container (verified using env in a bash shell and also by doing log.debug("ENV_MYSQL_DATABASE_URL via env: " + sys.env("ENV_MYSQL_DATABASE_URL")) inside the service), they are not used by application.conf and are not available in the configuration system. The values are empty/unset (verified through configuration.getString("ENV_MYSQL_DATABASE_URL").toString() and the exceptions thrown by the mysql system and other systems).
The only way I've gotten it to work was by fudging it into JAVA_OPTS via JAVA_OPTS=-DENV_MYSQL_DATABASE_URL=..... However, this seems like a hack and doesn't appear to scale very well to dozens of environment parameters.
Am I missing something? Is there a way to easily use the environment variables inside the Lagom application and application.conf?
Thanks!
I've used Lightbend config to configure Lagom services via environment variables in Docker containers for many years, so I know it can be done and it has been pretty straightforward in my experience.
With that in mind, when you say that they're not used by application.conf, do you mean that they're unset? Note that unless you're passing a very specific option as a Java property, configuration.getString("ENV_MYSQL_DATABASE_URL") will not read from an environment variable, so checking that will not tell you anything about whether mysql.url is affected by the environment variable. configuration.getString("mysql.url") will give you a better idea of what's going on.
I suspect that in fact your Docker image is being built with the dev-mode properties hardcoded in, and since Java system properties take precedence over everything else, they're shadowing the environment variable.
You may find it useful to structure your application.conf along these lines:
mysql_database_url = "..." # Some reasonable default for dev mode
mysql_database_url = ${?ENV_MYSQL_DATABASE_URL}

mysql {
  url = "jdbc://"${mysql_database_url}
}
In this case, you have a reasonable default for a developer (probably with some instructions in the docs for running MySQL in a way compatible with that configuration). The default can then be overridden by setting a Java property (e.g. JAVA_OPTS=-Dmysql_database_url=...) or by setting the ENV_MYSQL_DATABASE_URL environment variable.
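For example, passing the value in via Docker could then look like this (image name and value are placeholders):
docker run -e ENV_MYSQL_DATABASE_URL="db-host:3306/mydb" my-lagom-service:1.0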
While I agree with the answer provided by Levi Ramsey, I would suggest using Typesafe's config to load your configuration.
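A minimal sketch of what that could look like, assuming the mysql.url key from the question's application.conf:
import com.typesafe.config.{Config, ConfigFactory}

// Loads application.conf and resolves substitutions such as ${?ENV_MYSQL_DATABASE_URL}
// from system properties and environment variables.
val config: Config = ConfigFactory.load()
val mysqlUrl: String = config.getString("mysql.url")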
Adding a property to the system properties in Scala
import java.time.LocalDateTime

val sysProps = System.getProperties
sysProps.setProperty("current.date.time", LocalDateTime.now().toString())
I'm able to save this property.
I tried accessing this property (current.date.time) in log4j.properties like below:
log4j.appender.file.File=C:/Users/vsami/Desktop/Demo_${current.date.time}.log
log4j.appender.file.File=C:/Users/vsami/Desktop/Demo_${env:current.date.time}.log
The log file is generated in the above location as Demo_.log; expected: Demo_2019/11/27T13:21:00.log.
The above implementation does not let me access the variable from the environment properties and generate the log file with the expected naming convention.
The JVM has properties that can be passed via the -D parameter at VM boot: -Dprop=value.
These properties can be read via the System.getProperties API call. See the docs for more info.
Environment variables are not specified at JVM boot; they are managed independently of the VM by your environment (a shell such as bash, etc.). You cannot change environment variables in an already running VM. These variables can be read via System.getenv().
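In Scala the two are read differently (a small illustration; the key names are just examples):
// JVM system property, passed at startup as -Dcurrent.date.time=...
val fromProps: Option[String] = Option(System.getProperty("current.date.time"))

// Environment variable, set by the shell before launching the JVM
val fromEnv: Option[String] = sys.env.get("CURRENT_DATE_TIME")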
$ is a lookup operator in Log4j and can be used to resolve environment variables with the env: prefix, or values from the Main Arguments Lookup with the main: prefix.
You could use main:current.date.time and initialise your value as follows:
import org.apache.logging.log4j.core.lookup.MainMapLookup
MainMapLookup.setMainArguments("--current.date.time", LocalDateTime.now().toString)
Make sure that MainMapLookup is called before logging is initialised.
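Note that lookups such as ${main:...} are a Log4j 2 feature, so the appender has to be defined in a Log4j 2 configuration; a sketch of the relevant line in a Log4j 2 properties file:
appender.file.fileName = C:/Users/vsami/Desktop/Demo_${main:current.date.time}.log
Also keep in mind that the : characters produced by LocalDateTime.toString are not valid in a Windows file name, so the value may need reformatting.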
In my Play application the database settings are not known before startup of the application. I have to read them from an environment variable after automatic deployment and start of the application.
The platform the app is deployed on is Cloud Foundry, and there is an environment variable called VCAP_SERVICES (a JSON string) in which all services are listed, e.g. the database service including its credentials.
Is there a preferred way to do this, while still being able to use things like:
DataSource ds = DB.getDatasource();
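For reference, the raw variable can at least be read and parsed inside the application (a sketch in Scala using Play's JSON API; extracting the actual credentials from the service entries is omitted):
import play.api.libs.json.{JsValue, Json}

// VCAP_SERVICES is provided by Cloud Foundry as a JSON string in the environment.
val vcapServices: Option[JsValue] = sys.env.get("VCAP_SERVICES").map(Json.parse)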