Different routes for prod and dev in play 2.0 - deployment

My Play 2.0 application runs under different paths during development and in production:
during dev it runs at /, in production it runs under /crm/.
Is it possible to define a "root directory" of some sort for Play?
This article suggests using the isDev()-style methods, and this one suggests using a config variable, but it seems the routes file no longer allows embedded code: adding %{ }-style tags to the routes file results in compilation errors.

In 2.0 or 2.0.1 you can't do it.
If you use the trunk version you can define a property:
application.context="/AwesomePlayApplication"
This property can be set in the usual way for production.
But this will only be possible in a future release.
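For illustration (a sketch, assuming a build that honours this property), applied to the question's setup it would be:
# conf/application.conf (sketch)
application.context="/crm/"
Like any other configuration key, it can also be overridden per environment by passing -Dapplication.context=... as a system property when starting the production process.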

As there seems to be no other solution, I decided to go with a shell script that modifies the routes file on deployment and adds the necessary prefix to every route.

How to exclude development.conf from the Docker image created for a Play Framework application artifact

Using Scala Play Framework 2.5,
I build the app into a jar using the sbt plugin PlayScala,
and then build and push a Docker image out of it using the sbt plugin DockerPlugin.
conf/development.conf resides in the source code repository (in the same place as application.conf).
The last line in application.conf says include development, which means that when development.conf exists, its entries override some of the entries in application.conf in a way that provides all the default values needed to make the application runnable locally, right out of the box, after the source is cloned from source control with zero extra configuration. This technique lets every new developer slip right into a working application without wasting time on configuration.
The only missing piece needed to make that architectural design complete is a way to exclude development.conf from the final runtime of the app - otherwise these overrides leak into the production runtime and the application obviously fails to run.
That can be achieved in various ways.
One way could be to somehow inject logic into the build task (provided as part of the sbt plugin PlayScala, I assume) to exclude the file from the jar artifact.
Another way could be to inject logic into the Docker image creation process; that logic could manually delete development.conf from the existing jar prior to executing it (assuming that's possible).
If you have ever implemented one of the ideas offered,
or perhaps a different architectural approach that gives the same "works out of the box" feature, please be kind enough to share :)
I usually have the inverse logic:
I use the application.conf file (which Play uses by default) with everything needed to run locally. I then have a production.conf file that starts by including application.conf and then overrides the necessary settings.
For deploying to production (or staging), I specify the production.conf / staging.conf file to be used.
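For illustration, such a production.conf might look like this (the concrete override below is only an example); it can then be selected at startup with -Dconfig.resource=production.conf:
# conf/production.conf (sketch; the override below is just an example)
include "application.conf"

# Anything that must differ in production goes here and wins over application.conf.
db.default.url="jdbc:mysql://prod-db-host:3306/proddb"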
This is how I solved it eventually.
conf/application.conf is the production-ready configuration; it contains placeholders for environment variables whose values are injected at runtime by k8s, based on the service's deployment.yaml file.
Right next to it sits conf/development.conf - its first line is include application.conf and the rest are overrides that make the application run out of the box, right after git clone, with a simple sbt run.
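For illustration, development.conf might then look like this (the concrete values are just examples of local defaults):
# conf/development.conf (sketch)
include "application.conf"

# Local defaults so the app starts with sbt run right after cloning.
db.default.url="jdbc:mysql://127.0.0.1:3306/devdb"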
What makes the above work is the addition of the following to build.sbt:
PlayKeys.devSettings := Seq(
  "config.resource" -> "development.conf"
)
Works like a charm :)
This can be done in build.sbt via the mappings config key of sbt-native-packager, which determines the files that end up in the packaged distribution:
mappings in Universal ~= (_.filterNot(_._1.name == "development.conf"))
See the sbt-native-packager documentation on mappings for details.

How to parameterize Bamboo builds?

Please note, although my specific example here involves Java/Grails, it really applies to any type of task available in Bamboo.
I have a task that is a part of a Bamboo build where I run a Java/Grails app like so:
grails run-app -Dgrails.env=<ENV>
Where "<ENV>" can be one of several values (dev, prod, staging, etc.). It would be nice to "parameterize" the plan so that, sometimes, it runs like so:
grails run-app -Dgrails.env=dev
And other times, it runs like so:
grails run-app -Dgrails.env=staging
etc. Is this possible, and if so, how? And does the REST API allow me to specify parameter info so I can kick off differently parameterized builds using cURL or wget?
This seems to be a workaround, but I believe it can help resolve your issue. Atlassian has a free plugin called Bamboo Inject Variables Plugin. Basically, with this plugin, you can create an "Inject Bamboo Variables from file" task to read a variable from a file.
So the idea here is to have your script write the variable to a specific file and then kick off the build; the build itself will read that variable from the file and use it in the grails task.
UPDATE
After some searching, I found that you can use the REST API to change plan variables (NOT global ones). This makes your task simpler: just define a plan variable (in Plan Configuration -> Variables tab) and then change it every time you need to. Information on how to do this is available in the Bamboo Knowledge Base.
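For example, assuming the standard queue REST endpoint and a plan variable named grails_env (the plan key, host, credentials and variable name here are placeholders), kicking off a run with a specific value could look roughly like this:
curl -X POST -u user:password "https://bamboo.example.com/rest/api/latest/queue/PROJ-PLAN?bamboo.variable.grails_env=staging"
The grails task could then reference ${bamboo.grails_env} instead of a hard-coded environment.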

How to keep separate dev, test, and prod databases in Play! 2 Framework?

In particular, for test-cases, I want to keep the test database separate so that the test cases don't interfere with development or production databases.
What are some good practices for separating development, test and production environments?
EDIT1: Some context
In Ruby on Rails, there are by convention different configuration files for different environments. Does Play! 2 also support that?
Or do I have to cook up the configuration files myself and then write some glue code that selects the appropriate one?
At the moment, if I run sbt test, it uses the development database (configured as "default" in conf/application.conf). However, I would like Play! 2 to use a different test database.
EDIT2: On the commands that Play provides
For the Play! 2 framework, I observed this:
$ help play
Welcome to Play 2.2.2!
These commands are available:
-----------------------------
...OUTPUT SKIPPED...
run <port> Run the current application in DEV mode.
test Run Junit tests and/or Specs from the command line
start <port> Start the current application in another JVM in PROD mode.
...OUTPUT SKIPPED...
There are three well-defined commands for the "test", "development" and "production" instances:
test: runs the test cases, so it should automatically select the test configuration.
run <port>: runs the development instance on the specified port, so it should automatically select the development configuration.
start <port>: runs the production instance on the specified port, so it should automatically select the production configuration.
However, all these commands select the values that are provided in conf/application.conf. I feel there is some gap to be filled here.
Please do correct me if I am wrong.
EDIT3: Best approach is using Global.scala
Described here: How to manage application.conf in several environments with play 2.0?
A good practice is keeping separate instances of the application in separate folders and syncing them, e.g. via a git repo.
When you want to keep a single instance, you can use an alternative configuration file for each environment.
In your application.conf file there is an entry (or entries) for your database, e.g. db.default.url=jdbc:mysql://127.0.0.1:3306/devdb
The conf file can read environment variables using ${?ENV_VAR_NAME} syntax, so change that to something like db.default.url=${?DB_URL} and use environment variables.
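For example, you can keep a local default and let the environment variable override it only when it is set (standard HOCON behaviour for ${?...} substitutions):
# conf/application.conf - the second line only takes effect when DB_URL is defined
db.default.url="jdbc:mysql://127.0.0.1:3306/devdb"
db.default.url=${?DB_URL}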
A simpler way to get this done and to manage your configuration more easily is via GlobalSettings. There is a method you can override named onLoadConfig; check its API documentation for details.
Basically, in your project's conf/ folder, you'll set up something similar to the below:
conf/application.conf --> configurations common for all environment
conf/dev/application.conf --> configurations for development environment
conf/test/application.conf --> configurations for testing environment
conf/prod/application.conf --> configurations for production environment
So with this, your application knows which configuration to load for your specific environment mode. For a code snippet of my onLoadConfig implementation, see my blog post.
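A minimal sketch of what such an onLoadConfig override might look like (this assumes the pre-2.4 GlobalSettings API; the merge logic follows the conf/dev, conf/test, conf/prod layout above):
// app/Global.scala - sketch only
import java.io.File
import com.typesafe.config.ConfigFactory
import play.api._

object Global extends GlobalSettings {
  override def onLoadConfig(config: Configuration, path: File,
                            classloader: ClassLoader, mode: Mode.Mode): Configuration = {
    // Load conf/<mode>/application.conf (dev, test or prod) and let it
    // override the common settings from conf/application.conf.
    val modeConfig = ConfigFactory.parseResources(classloader, s"${mode.toString.toLowerCase}/application.conf")
    super.onLoadConfig(config ++ Configuration(modeConfig), path, classloader, mode)
  }
}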
I hope this is helpful.

Passing RAILS_ENV into Torquebox without using a Deployment Descriptor

I am wondering if there is a way to pass a value for RAILS_ENV directly into the Torquebox server without going through a deployment descriptor; similar to how I can pass properties into Java with the -D option.
I have been wrestling with various deployment issues with Torquebox over the past couple of weeks. I think a large part of the problem has to do with packaging the gems into the Knob file, which is the most practical way to manage them in a Windows environment. I have tried archive deployment and expanded deployment, with and without an external deployment descriptor.
With an external deployment descriptor, I found the packaged Gem dependencies were not properly deployed and I received errors about missing dependencies.
When expanded, I had to fudge around a lot with the dependencies and what got included in the Knob, but eventually I got it to deploy. However, certain files in the expanded Knob were marked as failed (possibly duplicate dependencies?), though they did not affect the overall deployment. The problem was that when the server restarted, deployment would fail the second time, complaining that it could not redeploy one of the previously failed files.
The only approach I have found to work consistently is archive deployment without an external deployment descriptor. However, I still need a way to tell the application which environment it is running in. I have different Torquebox instances for each environment and each runs only the one application, so it would be fairly reasonable to configure this at the server level.
Any assistance in this matter would be greatly appreciated. Thank you very much!
The solution I finally came to was to pass in RAILS_ENV as a Java property to the Torquebox server and then to set ENV['RAILS_ENV'] to this value in the Rails boot.rb initializer.
Step 1: Set Java Property
First, you will need to set a Rails Environment java property for your Torquebox server. To keep with standard Java conventions, I called this rails.env.
Dependent on your platform and configuration, this change will need to be made in one of the following scripts:
Using JBoss Windows Service Wrapper: service.bat
Standalone environment: standalone.conf.bat (Windows) or standalone.conf (Unix)
Domain environment: domain.conf.bat (Windows) or domain.conf (Unix)
Add the following line to the appropriate file above to set this Java property:
set JAVA_OPTS=%JAVA_OPTS% -Drails.env=staging
The -D option is used for setting Java system properties.
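For the Unix standalone.conf or domain.conf, the equivalent line would use shell syntax:
JAVA_OPTS="$JAVA_OPTS -Drails.env=staging"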
Step 2: Set ENV['RAILS_ENV'] based on Java Property
We want to set the RAILS_ENV as early as possible, since it is used by a lot of Rails initialization logic. Our first opportunity to inject application logic into the Rails Initialization Process is boot.rb.
See: http://guides.rubyonrails.org/initialization.html#config-boot-rb
The following line should be added to the top of boot.rb:
# boot.rb (top of the file)
ENV['RAILS_ENV'] = ENV_JAVA['rails.env'] if defined?(ENV_JAVA) && ENV_JAVA['rails.env']
This needs to be the first thing in the file, so Bundler can make intelligent decisions about the environment.
As you can see above, a seldom mentioned feature of JRuby is that it conveniently exposes all Java system properties via the ENV_JAVA global map (mirroring the ENV ruby map), so we can use it to access our Java system property.
We check that ENV_JAVA is defined (i.e. JRuby is being used), since we support multiple deployment environments.
I force the rails.env property to be used when present, as it appears that RAILS_ENV already has a default value at this point.

How to add application.conf files for akka multi-jvm applications

I have created a set of applications in Akka using the multi-JVM setup. Following all the conventions in the docs at http://doc.akka.io/docs/akka/snapshot/dev/multi-jvm-testing.html, I can run them using multi-jvm:run {application name}.
This works perfectly, but the applications now require Akka remoting features. To enable this I need to change settings in application.conf, as described here: http://doc.akka.io/docs/akka/snapshot/scala/remoting.html
My problem is that I do not know how to give each of these multi-JVM test applications an application.conf file of its own. I'm not sure whether there is a file-system-based convention, or whether it has to be done in code. Either solution would solve the problem in theory.
I suggest that you define the configuration in code in the test using ConfigFactory.parseString and use that in MultiNodeConfig.commonConfig.
Note that you can also define node-specific config with MultiNodeConfig.nodeConfig.
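A minimal sketch of that approach (the object name, roles and concrete settings below are illustrative, not taken from your project):
// Supplies the remoting configuration in code instead of per-application application.conf files.
import akka.remote.testkit.MultiNodeConfig
import com.typesafe.config.ConfigFactory

object MyMultiNodeConfig extends MultiNodeConfig {
  val node1 = role("node1")
  val node2 = role("node2")

  // Settings shared by every node.
  commonConfig(ConfigFactory.parseString("""
    akka.actor.provider = "akka.remote.RemoteActorRefProvider"
    akka.loglevel = INFO
  """))

  // Settings that apply only to node1.
  nodeConfig(node1)(ConfigFactory.parseString("""
    akka.loglevel = DEBUG
  """))
}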
The application.conf files are loaded from the classpath, but you can override them with system properties, as described here: https://github.com/typesafehub/config#standard-behavior
So for each JVM you can specify the necessary system properties (which is kind of ugly), or you can drop an application.conf on that JVM's classpath that overrides the reference.conf files in the libraries.