change config/newrelic.yml path - capistrano

I'm using the New Relic Ruby agent with Symfony 2.1 and Capifony.
I'd like to be able to tell the New Relic agent that the config file lives under app/config/newrelic.yml instead of config/newrelic.yml.
Is this possible?

You have to set the NRCONFIG environment variable, and you can do it directly in the cap call like this:
cap newrelic:notice_deployment NRCONFIG=app/config/newrelic.yml
I would like to set it in the deployment configuration instead, and according to the Capistrano documentation this should work:
set :default_environment, {
  "NRCONFIG" => "app/config/newrelic.yml"
}
but it doesn't. ;o(
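A minimal sketch of an alternative, assuming (not verified here) that :default_environment only affects commands Capistrano runs on the remote hosts, while newrelic:notice_deployment runs on the machine invoking cap, so NRCONFIG has to be set in the local process, directly in deploy.rb:

ENV['NRCONFIG'] ||= 'app/config/newrelic.yml'   # point the local agent at app/config/newrelic.yml

# Load New Relic's Capistrano recipes and notify New Relic after each deploy.
require 'new_relic/recipes'
after 'deploy', 'newrelic:notice_deployment'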

Access agent hostname for a build variable

I've got release pipelines defined that have worked. I've got a config transform that writes an API URL to a config file (currently a hardcoded URL).
What I'd like is to have the config rewritten based on the agent it's being deployed on.
E.g. if the machine being deployed to is TEST-1, I'd like to write https://TEST-1.somedomain.com/api into the config using that transform step.
The .somedomain.com/api part can be static.
I've tried modifying the pipeline variable's value to be https://${{Environment.Name}}.somedomain.com/api, but it just replaces API_URL in the config with that literal string (the machine name is not populated into that variable).
Since variables are the source of the value written to the configs during the transform, I'm struggling to see another way to do this.
Some gotchas:
We're using non-YAML pipeline definitions (I know I've seen people put logic in variable definitions within YAML pipelines).
We can't just use localhost, as the configuration is read by a JavaScript-rich app, so the JS would try to connect to localhost instead of the server.
I'm interested in any way I could solve this problem.
${{Environment.Name}} is not valid syntax for either YAML or classic pipelines.
In classic pipelines it would be $(Environment.Name).
In YAML, $(Environment.Name) or ${{ variables['Environment.Name'] }} would work.
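A minimal sketch of the YAML variant, assuming the deployment job targets an environment named after the machine; API_URL and somedomain.com are taken from the question, everything else is illustrative:

# Hedged sketch: macro syntax is expanded at runtime, so the environment name
# ends up in the value that the existing config transform writes for API_URL.
variables:
  API_URL: 'https://$(Environment.Name).somedomain.com/api'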

ddev WSL2 Magento 2 multistore set $MAGE_RUN_CODE & $MAGE_RUN_TYPE

Does anyone know how to set the $MAGE_RUN_CODE & $MAGE_RUN_TYPE variables for a Magento 2 multi-site setup using ddev-local?
I have added the new domain to the additional_hostnames variable in .ddev/config.yaml, but now I need to tell the nginx Docker container to serve the Magento storefront when the newstorefront.ddev.site domain is requested.
Any ideas? Thanks.
I have got this working by overriding .ddev/nginx_full/nginx-site.conf and adding the following lines towards the top of the server block (just before the existing if ($mage_run_code = '') { check):
if ($http_host = mysecondsite.ddev.site) {
    set $mage_run_code 'mysecondsite_en';
    set $mage_run_type store;
}
Now it all works correctly.
Thanks
In ddev you can set environment variables globally or per project in the config; see https://ddev.readthedocs.io/en/stable/users/extend/customization-extendibility/#providing-custom-environment-variables-to-a-container
Basic idea: add this to your .ddev/config.yaml:
web_environment:
  - MAGE_RUN_CODE=someval
  - MAGE_RUN_TYPE=someotherval
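For reference, a hedged sketch of how the two pieces mentioned in this thread could sit together in .ddev/config.yaml; the hostname and store code are the example values from above. Note that web_environment sets the variables for the whole web container, so the nginx override is still what makes them per hostname:

# .ddev/config.yaml (sketch)
additional_hostnames:
  - mysecondsite            # served as mysecondsite.ddev.site
web_environment:
  - MAGE_RUN_CODE=mysecondsite_en
  - MAGE_RUN_TYPE=store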

Using environment variables to configure Docker deployment of Lagom Scala application

We're developing several Lagom-based Scala micro-services. They are configured using variable replacement in application.conf, eg.
mysql = {
url = "jdbc:mysql://"${?ENV_MYSQL_DATABASE_URL}
During development, we set these variables as Java system properties via an env.sbt file that calls System.setProperty("ENV_MYSQL_DATABASE_URL", url). This is working fine.
Now I want to deploy this in a container to my local Docker installation. We are using the SbtReactiveAppPlugin to build the Docker image from build.sbt and simply run sbt Docker/publishLocal. This works as expected, a Docker image is created and I can fire it up.
However, passing in environment variables using the standard docker or docker-compose mechanisms does not seem to work. While I can see that the environment variables are set correctly inside the Docker container (verified using env on a bash and also by doing log.debug("ENV_MYSQL_DATABASE_URL via env: " + sys.env("ENV_MYSQL_DATABASE_URL")) inside the service), they are not used by the application.conf and not available in the configuration system. The values are empty/unset (verified through configuration.getString("ENV_MYSQL_DATABASE_URL").toString() and the exceptions thrown by the mysql system and other systems).
The only way I've gotten it to work is by fudging this into JAVA_OPTS via JAVA_OPTS=-DENV_MYSQL_DATABASE_URL=.... However, this seems like a hack, and it doesn't appear to scale very well to dozens of environment parameters.
Am I missing something, is there a way to easily use the environment variables inside the Lagom application and application.conf?
Thanks!
I've used Lightbend Config to configure Lagom services via environment variables in Docker containers for many years, so I know that it can be done, and it has been pretty straightforward in my experience.
With that in mind, when you say that they're not used by application.conf, do you mean that they're unset? Note that unless you're passing a very specific option as a Java property, configuration.getString("ENV_MYSQL_DATABASE_URL") will not read from an environment variable, so checking that will not tell you anything about whether mysql.url is affected by the environment variable. configuration.getString("mysql.url") will give you a better idea of what's going on.
I suspect that in fact your Docker image is being built with the dev-mode properties hardcoded in, and since Java system properties take precedence over everything else, they're shadowing the environment variable.
You may find it useful to structure your application.conf along these lines:
mysql_database_url = "..." # Some reasonable default for dev mode
mysql_database_url = ${?ENV_MYSQL_DATABASE_URL}

mysql {
  url = "jdbc:mysql://"${mysql_database_url}
}
In this case, you have a reasonable default for a developer (probably including in the docs some instructions for running MySQL in a way compatible with that configuration). The default can then be overridden by setting a Java system property (e.g. JAVA_OPTS=-Dmysql_database_url=...) or by setting the ENV_MYSQL_DATABASE_URL environment variable.
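To check which value actually wins, a small hedged sketch using the Typesafe/Lightbend Config API directly; the key names are the ones from the snippet above:

// Resolve the config the same way Lagom/Play does and print the effective
// value. With the layout above, a -Dmysql_database_url system property shadows
// the environment variable, which in turn shadows the dev-mode default.
import com.typesafe.config.ConfigFactory

object CheckConfig extends App {
  val config = ConfigFactory.load()
  // Prints e.g. "jdbc:mysql://db-host:3306/app" when ENV_MYSQL_DATABASE_URL is set.
  println(config.getString("mysql.url"))
}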
While I agree with the answer provided by Levi Ramsey, I would suggest you use Typesafe's Config library to load your config.

Azure Batch default application package version

According to the Microsoft documentation, in Azure Batch the default version of an application is deployed when no version is specified.
In my Azure Batch account I have uploaded an application "MyApp" and set the default version, let's say version "1.0".
When I create a new pool (in .NET), if I set the ApplicationPackageReferences omitting the version, i.e.:
myCloudPool.ApplicationPackageReferences = new List<ApplicationPackageReference>
{
    new ApplicationPackageReference
    {
        ApplicationId = "MyApp"
    }
};
The nodes will get the status "Unusable".
If I do the same, but at task level, then the default application is deployed successfully to the node.
Why is that?
Thank you Olandese. I came across this specific case only for the default version at pool level; a fix is on its way, and I will keep you posted when it gets released.
In the meantime there are a few options:
Task-level packages with the default version, which you already mentioned above.
At pool level, use the version parameter to pass the default version explicitly.
Sample:
new List<ApplicationPackageReference>
{
    new ApplicationPackageReference(appID, version: appVersion),
},
Thanks for your patience and apologies for the inconvenience; I will update you once the release is out.
Further addition: if your default version changes frequently, I would do it this way, with appId as a constant and the version as the dynamic part:
new List<ApplicationPackageReference>
{
    new ApplicationPackageReference(appID, version: appVersion),
},
By doing this your code pretty much guarantees that the node will grab the specified version if it's there. Hope this helps.

Capistrano in Open Source Projects and different environments

I'm considering using Capistrano to deploy my Rails app to my server. Currently I'm using a script which does all the work for me, but Capistrano looks pretty nice and I want to give it a try.
My first problem/question is: how do I use Capistrano properly in open source projects? I don't want to publish my deploy.rb, for several reasons:
It contains sensitive information about my server. I don't want to publish that :)
It contains the config for MY server. Other people who deploy this open source project to their own servers will have a different configuration, so publishing mine is pretty pointless; it's useless for them.
Second problem/question: How do I manage different environments?
Background: on my server I provide two different environments for my application: the stable system, which runs the current stable release branch and lives at www.domain.com, and an integration environment for the development team at dev.domain.com running the master branch.
How do I tell Capistrano to deploy the stable system or the dev system?
The way I handle sensitive information (passwords etc.) in Capistrano is the same way I handle them in general: I use an APP_CONFIG hash that comes from a YAML file that isn't checked into version control. This is a classic technique that's covered e.g. in RailsCast #226, or see this StackOverflow question.
There are a few things you have to do a little differently when using this approach with Capistrano:
Normally APP_CONFIG is loaded from your config/application.rb (so it happens early enough to be usable everywhere else), but Capistrano cap tasks won't load that file. You can just load it from config/deploy.rb too; here's the top of a contrived config/deploy.rb file using an HTTP repository that requires a username/password.
require 'bundler/capistrano'
require 'yaml'

APP_CONFIG = YAML.load_file("config/app_config.yml")

set :repo_user, APP_CONFIG['repo_user']
set :repo_password, APP_CONFIG['repo_password']
set :repository, "http://#{repo_user}:#{repo_password}@hostname/repositoryname.git/"
set :scm, :git
# ...
The config/app_config.yml file is not checked into version control (put that path in your .gitignore or similar); I normally check in a config/app_config.yml.sample that shows the parameters that need to be configured:
repo_user: 'usernamehere'
repo_password: 'passwordhere'
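(For completeness, here is the application side of this pattern as a hedged sketch along the lines of the RailsCast #226 approach mentioned above: load the same untracked file early in config/application.rb so APP_CONFIG is available throughout the app.)

# config/application.rb (sketch)
require 'yaml'

# config/app_config.yml sits next to this file and is ignored by version control.
APP_CONFIG = YAML.load_file(File.expand_path('../app_config.yml', __FILE__))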
If you're using the APP_CONFIG for your application, it probably needs to have different values on your different deploy hosts. So have your Capistrano setup make a symlink from the shared/ directory to each release after it's checked out. You want to do this early in the deploy process, because applying migrations might need a database password. So in your config/deploy.rb put this:
after 'deploy:update_code', 'deploy:symlink_app_config'

namespace :deploy do
  desc "Symlinks the app_config.yml"
  task :symlink_app_config, :roles => [:web, :app, :db] do
    run "ln -nfs #{deploy_to}/shared/config/app_config.yml #{release_path}/config/app_config.yml"
  end
end
Now, for the second part of your question (about deploying to multiple hosts), you should configure separate Capistrano "stages" for each host. You put everything that's common across all stages in your config/deploy.rb file, and then you put everything that's unique to each stage into config/deploy/[stagename].rb files. You'll have a section in config/deploy.rb that defines the stages:
# Capistrano settings
require 'bundler/capistrano'
require 'capistrano/ext/multistage'
set :stages, %w(preproduction production)
set :default_stage, 'preproduction'
(You can call the stages whatever you want; the Capistrano stage name is separate from the Rails environment name, so the stage doesn't have to be called "production".) Now when you use the cap command, insert the stage name between cap and the target name, e.g.:
$ cap preproduction deploy   # deploys to the 'preproduction' environment
$ cap production deploy      # deploys to the 'production' environment
$ cap deploy                 # deploys to whatever you defined as the default
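Finally, a hedged sketch of what those per-stage files could look like for the two environments from the question; the branch name for the stable system and the role layout are assumptions:

# config/deploy/production.rb (sketch): the stable system
set :branch, 'stable'        # assumed name of the stable release branch
server 'www.domain.com', :web, :app, :db, :primary => true

# config/deploy/preproduction.rb (sketch): the integration environment
set :branch, 'master'
server 'dev.domain.com', :web, :app, :db, :primary => true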