Pass custom environment variables to a cloud task when launching from Spring Cloud Data Flow server - spring-cloud

I'm trying to launch a cloud task from Spring Cloud Data Flow Server and need to pass some custom environment variables to the application. How can I pass the values during a manual launch and through the task scheduler?
I'm able to set java_opts via the param deployer.timestamp-task.cloudfoundry.javaOpts. I tried to set the variables var1 & var2 as below, but it didn't work:
deployer.timestamp-task.cloudfoundry.env.var1
deployer.timestamp-task.cloudfoundry.env.var2
deployer.timestamp-task.cloudfoundry.env={"var1":"value1", "var2":"value2"}

The properties in the following format are expected to work:
deployer.timestamp-task.cloudfoundry.env.var1=value1
deployer.timestamp-task.cloudfoundry.env.var2=value2
If this doesn't work, please file a bug report with the exact version of SCDF you are using.
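For reference, a minimal manual launch from the SCDF shell might look like this (a sketch; the task name and values are placeholders taken from the question):
task launch timestamp-task --properties "deployer.timestamp-task.cloudfoundry.env.var1=value1,deployer.timestamp-task.cloudfoundry.env.var2=value2"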

Related

Access agent hostname for a build variable

I've got release pipelines defined that have worked. I've got a config transform that writes an API URL to a config file (currently a hardcoded API URL).
What I'd like to do is have the config rewritten based on the agent it's being deployed on.
e.g. if the machine being deployed to is TEST-1, I'd like to write https://TEST-1.somedomain.com/api into the config using that transform step.
The .somedomain.com/api can be static.
I've tried modifying the pipeline variable's value to https://${{Environment.Name}}.somedomain.com/api, but it just replaces API_URL in the config with that literal string (it does not substitute the machine name).
Since variables are the source of the values written to configs during the transform, I'm struggling to see another way to do this.
Some gotchas:
I'm using non-YAML pipeline definitions (I know I've seen people put logic in variable definitions within YAML pipelines).
I can't just use localhost, as the configuration is read into a JavaScript-rich app, which would end up with JS trying to connect to localhost instead of to the server.
I'm interested in any ways I could solve this problem.
${{Environment.Name}} is not valid syntax in either YAML or classic pipelines.
In classic pipelines it would be $(Environment.Name).
In YAML, $(Environment.Name) or ${{ variables['Environment.Name'] }} would work.
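For example, in a classic pipeline you could set the API_URL variable's value to the following (a sketch; this assumes Environment.Name resolves on the deployment group agent at release time):
https://$(Environment.Name).somedomain.com/api
which should expand to https://TEST-1.somedomain.com/api when deploying to the TEST-1 machine.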

How Environment variable names reflect the structure of an appsettings.json

I am using ASP.NET Core 5.0 and have a Web API app deployed to an internal cloud, where a few settings, like the DB connection, are controlled via environment variables on the host. In my Startup.cs I have the code below:
string projectDbConnection = Configuration.GetSection("ProjectDatabaseSettings").GetValue<string>("PROJECT_DB_CONNECTION");
string projectDbName = Configuration.GetSection("ProjectDatabaseSettings").GetValue<string>("PROJECT_DB_NAME");
As I understand it, when running locally in IIS Express the app looks for appsettings.<Environment>.json, whose values take precedence over those in appsettings.json.
But the app always connects to the wrong DB when deployed to the cloud, where I set PROJECT_DB_CONNECTION & PROJECT_DB_NAME as environment variables for the app.
To make the app read from the environment variables, I had to change the above code in Startup.cs to:
string projectDbConnection = Configuration.GetValue<string>("PROJECT_DB_CONNECTION");
string projectDbName = Configuration.GetValue<string>("PROJECT_DB_NAME");
I am unable to understand the difference between GetSection(...).GetValue and plain GetValue, and why I should use Configuration.GetValue() to make the app read from environment variables.
What am I missing, and when should I use which?
Naming of environment variables
There is a naming convention for mapping nested appsettings keys to environment variables; see naming of environment variables.
Each element in the hierarchy is separated by a double underscore.
In your case it would work if you name the env variable: ProjectDatabaseSettings__PROJECT_DB_CONNECTION.
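To see why the original code missed the flat environment variables, compare the configuration keys each call reads (a sketch based on the code in the question):
// Reads the key "ProjectDatabaseSettings:PROJECT_DB_CONNECTION", which the
// environment variable ProjectDatabaseSettings__PROJECT_DB_CONNECTION maps to
string conn = Configuration.GetSection("ProjectDatabaseSettings")
    .GetValue<string>("PROJECT_DB_CONNECTION");
// Reads the top-level key "PROJECT_DB_CONNECTION", which a plain
// PROJECT_DB_CONNECTION environment variable maps to
string conn2 = Configuration.GetValue<string>("PROJECT_DB_CONNECTION");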
Config Order
According to the Microsoft documentation, the config sources are checked in the following order:
ChainedConfigurationProvider: adds an existing IConfiguration as a source. In the default configuration case, the host configuration is added and set as the first source for the app configuration.
appsettings.json using the JSON configuration provider.
appsettings.Environment.json using the JSON configuration provider. For example, appsettings.Production.json and appsettings.Development.json.
App secrets when the app runs in the Development environment.
Environment variables using the Environment Variables configuration provider.
Command-line arguments using the Command-line configuration provider.
The use case
This is useful when you develop locally using appsettings.json but run in a cluster or cloud in production, where it is more convenient to use environment variables (e.g. in Kubernetes, environment variables are set via ConfigMaps).
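For example, a Kubernetes pod spec could inject the nested setting like this (a sketch; the value is a placeholder):
env:
  - name: ProjectDatabaseSettings__PROJECT_DB_CONNECTION
    value: "Server=mydb;Database=project" # placeholder connection string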

Azure Data Factory not Using Data Flow Runtime

I have an Azure Data Factory with a pipeline that I'm using to pick up data from an on-premise database and copy to CosmosDB in the cloud. I'm using a data flow step at the end to delete documents that don't exist in the source from the sink.
I have 3 integration runtimes set up:
AutoResolveIntegrationRuntime (default set up by Azure)
Self hosted integration runtime (I set this up to connect to the on-premise database so it's used by the source dataset)
Data flow integration runtime (I set this up to be used by the data flow step with a TTL setting)
The issue I'm seeing is when I trigger the pipeline the AutoResolveIntegrationRuntime is the one being used so I'm not getting the optimisation that I need from the Data flow integration runtime with the TTL.
Any thoughts on what might be going wrong here?
In my experience, only the AutoResolveIntegrationRuntime (the default set up by Azure) supports that optimization. When the data flow is set to run on a non-default integration runtime, the optimization isn't available, and once an integration runtime has been created, its settings can't be changed. The Data Factory documentation doesn't say much about this, but when I ran the pipeline I found that the data flow runtime wasn't used.
That means that no matter which integration runtime you use to connect to the dataset, the data flow will always run on the default Azure integration runtime.
A self-hosted integration runtime (SHIR) doesn't support data flow execution.

Using environment variables to configure Docker deployment of Lagom Scala application

We're developing several Lagom-based Scala microservices. They are configured using variable replacement in application.conf, e.g.
mysql = {
  url = "jdbc:mysql://"${?ENV_MYSQL_DATABASE_URL}
}
During development, we set these variables as Java System Properties via a env.sbt file that calls System.setProperty("ENV_MYSQL_DATABASE_URL", url). This is working fine.
Now I want to deploy this in a container to my local Docker installation. We are using the SbtReactiveAppPlugin to build the Docker image from build.sbt and simply run sbt Docker/publishLocal. This works as expected, a Docker image is created and I can fire it up.
However, passing in environment variables using the standard docker or docker-compose mechanisms does not seem to work. I can see that the environment variables are set correctly inside the Docker container (verified using env in a bash shell, and also via log.debug("ENV_MYSQL_DATABASE_URL via env: " + sys.env("ENV_MYSQL_DATABASE_URL")) inside the service), but they are not used by application.conf and are not available in the configuration system. The values are empty/unset (verified through configuration.getString("ENV_MYSQL_DATABASE_URL").toString() and the exceptions thrown by the mysql system and other systems).
The only way I've gotten it to work was by fudging this into JAVA_OPTS via JAVA_OPTS=-DENV_MYSQL_DATABASE_URL=..... However, this feels like a hack, and it doesn't scale well to dozens of environment parameters.
Am I missing something, is there a way to easily use the environment variables inside the Lagom application and application.conf?
Thanks!
I've used Lightbend config to configure Lagom services via environment variables in docker containers for many years, so know that it can be done and has been pretty straightforward in my experience.
With that in mind, when you say that they're not used by application.conf, do you mean that they're unset? Note that unless you're passing a very specific option as a Java property, configuration.getString("ENV_MYSQL_DATABASE_URL") will not read from an environment variable, so checking that will not tell you anything about whether mysql.url is affected by the environment variable. configuration.getString("mysql.url") will give you a better idea of what's going on.
I suspect that in fact your Docker image is being built with the dev-mode properties hardcoded in, and since Java system properties take precedence over everything else, they're shadowing the environment variable.
You may find it useful to structure your application.conf along these lines:
mysql_database_url = "..." # Some reasonable default for dev mode
mysql_database_url = ${?ENV_MYSQL_DATABASE_URL}
mysql {
  url = "jdbc://"${mysql_database_url}
}
In this case, you have a reasonable default for a developer (probably with instructions in the docs for running MySQL in a way compatible with that configuration). The default can then be overridden by setting a Java property (e.g. JAVA_OPTS=-Dmysql_database_url=...) or by setting the ENV_MYSQL_DATABASE_URL environment variable.
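With that structure in place, the override at deploy time is the standard Docker mechanism (a sketch; the image name and URL are placeholders):
docker run -e ENV_MYSQL_DATABASE_URL="mysql.internal:3306/mydb" my-lagom-service:latest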
While I agree with the answer provided by Levi Ramsey, I would suggest using Typesafe Config to load your configuration.
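For example (a minimal sketch; mysql.url refers to the config structure shown above):
import com.typesafe.config.{Config, ConfigFactory}

// load() resolves ${?...} substitutions, falling back to system properties
// and environment variables for optional references
val config: Config = ConfigFactory.load()
val mysqlUrl: String = config.getString("mysql.url")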

Azure powershell slot specific appsetting

I'm trying to create a script to set/update app settings in an Azure web app slot using PowerShell. Using the examples in Adding an App Settings to existing Azure Web Application using Azure Power Shell, it works.
My problem is that I want "Slot setting" to be true. In all the examples I've found and in resources.azure.com, the settings are always name/value pairs, with no property to specify the value as "Slot setting".
Is this even possible to script?
Thanks.
Yes, when using the Set-AzureWebsite command you can do:
Set-AzureWebsite -SlotStickyAppSettingNames @("setting name")
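A slightly fuller sketch (this assumes the classic Azure Service Management cmdlets; the site and setting names are placeholders):
# Mark the named app settings as sticky to the slot
Set-AzureWebsite -Name "MySite" -SlotStickyAppSettingNames @("API_URL", "DB_CONNECTION")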