I am building a web application with Play 2.2 for Scala. I have one issue with the auto-reload feature:
I pass some settings when starting the server, like this: play "run -Dtwitter.consumerSecret=mykey -Dtwitter.tokenSecret=mysecret". When the application recompiles on change, it does not take the parameters into account, and I have to restart the server. How do I tell sbt to take the settings into account on reload?
Thanks for your help.
We add the following line to the bottom of our application.conf:
include "overrides.conf"
We make sure it's in the ignore file of our version control system.
The overrides.conf file allows us to tweak (and add) settings that should remain local.
Note that the include statement is ignored if the file cannot be found.
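For illustration only, a minimal sketch of such an overrides.conf, using the Twitter keys from the question above (the key names come from that question; substitute your own settings):

# overrides.conf - local-only values, kept out of version control
twitter.consumerSecret = "mykey"
twitter.tokenSecret = "mysecret"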
Using Scala Play Framework 2.5,
I build the app into a jar using the sbt plugin PlayScala,
and then build and push a docker image out of it using the sbt plugin DockerPlugin.
conf/development.conf resides in the source code repository (in the same directory as application.conf).
The last line in application.conf says include development, which means that if development.conf exists, its entries override some of the entries in application.conf, in such a way that they provide all the default values needed to make the application runnable locally right out of the box after the source is cloned from source control, with zero extra configuration. This technique lets every new developer slip right into a working application without wasting time on configuration.
The only missing piece to make that architectural design complete is finding a way to exclude development.conf from the final runtime of the app - otherwise these overrides leak into the production runtime and the application obviously fails to run.
That can be achieved in various different ways.
One way could be to somehow inject logic into the build task (provided as part of the sbt plugin PlayScala, I assume) to exclude the file from the jar artifact.
Another way could be to inject logic into the docker image creation process. That logic could manually delete development.conf from the existing jar prior to executing it (assuming that's possible).
If you ever implemented one of the ideas offered,
or maybe some different architectural approach that gives the same "works out of the box" feature, please be kind enough to share :)
I usually have the inverse logic:
I use the application.conf file (that Play uses by default) with all the things needed to run locally. I then have a production.conf file that starts by including the application.conf, and then overrides the necessary stuff.
For deploying to production (or staging), I specify that the production/staging .conf file should be used.
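As a rough sketch of that setup (the file name production.conf comes from the answer above; the database key, the environment variable and the app name myapp are purely illustrative):

# production.conf - include the local defaults, then override what differs
include "application.conf"
db.default.url = ${?DATABASE_URL}

The file is then selected at deploy time, e.g. for a packaged Play application:

./bin/myapp -Dconfig.resource=production.conf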
This is how I solved it eventually.
conf/application.conf is the production-ready configuration; it contains placeholders for environment variables whose values will be injected at runtime by k8s, given the service's deployment.yaml file.
Right next to it sits conf/development.conf - its first line is include application.conf, and the rest of it consists of overrides that make the application run out of the box, right after git clone, with a simple sbt run.
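To illustrate, development.conf could look roughly like this (the override keys below are hypothetical examples, not taken from the actual project):

include "application.conf"

# local defaults so that `sbt run` works right after git clone
db.default.url = "jdbc:h2:mem:dev"
some.api.baseUrl = "http://localhost:9000"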
What makes the above work is the addition of the following to build.sbt:
PlayKeys.devSettings := Seq(
"config.resource" -> "development.conf"
)
Works like a charm :)
This can be done via the mappings config key of sbt-native-packager:
mappings in Universal ~= (_.filterNot(_._1.name == "development.conf"))
See here.
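Putting the two pieces together, the relevant build.sbt fragment could look roughly like this (assuming the PlayScala and DockerPlugin setup described in the question; both lines are taken from the answers above):

// serve development.conf only in dev mode (sbt run)
PlayKeys.devSettings := Seq("config.resource" -> "development.conf")

// keep development.conf out of the packaged artifact / Docker image
mappings in Universal ~= (_.filterNot(_._1.name == "development.conf"))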
I have recently started developing an Eclipse plugin (which is basic stuff for now) and I am struggling with the "default" way to run an Eclipse plugin ("Run as Eclipse application").
Eclipse starts another instance with my plugin already installed in it (this is the default behaviour).
The problem is that when I want to re-run my plugin project and press the "run" button again (or Ctrl + F11) while the other Eclipse instance is still running, I get the following message:
"Could not launch the application because the associated workspace is currently in use by another Eclipse application".
The error makes sense, and when I close "testing" Eclipse instance I am able to run my plugin again.
The question is: is this the normal routine for plugin development? Maybe I am missing something, e.g. special arguments for Eclipse?
This all seems pretty normal. The error message appears because the run configuration specifies a workspace, and when you start a second instance using the same workspace, it is locked and considered in use.
What I usually do when testing a plugin is create a run configuration (click "Run...") in which I disable all the plugins I won't need for testing. This makes sure the test starts up a couple of seconds quicker. Make sure you save that run configuration as a *.launch file as well; that makes it quicker to test the next time, and the file can also be used to share the configuration.
There's a lot you can configure in the run configuration, such as Eclipse arguments, VM arguments, whether you want environment variables set, etc. So be sure to experiment a little.
In your run configuration, under Main tab -> Workspace Data -> Location, add this to the text box:
${workspace_loc}/../runtime-EclipseApplication${current_date:yyyyMMdd_HHmmss}
Note the suffix ${current_date:yyyyMMdd_HHmmss}: with it, a new workspace is created every time you launch your application, so you will not get any error message saying the workspace is locked.
But be careful, as the .metadata folder will be different for each instance because their workspaces are different. Thus preferences stored/retrieved by different instances are NOT in sync.
You are probably missing one important point: Eclipse supports the Java hot code replacement. Therefore in many cases you can modify your Java code while your application Eclipse instance is running, save the code and continue without restarting.
If hot code replacement is not possible, Eclipse will tell you, so you always know whether the editing changes are applied to the running instance.
This works best with more recent versions of the JVM, so consider upgrading to the latest Java 7 version, even if you write code to be compliant with Java 1.5 or 6.
If you are using Eclipse and your development server is running in the debugger, when you save your changes to this file, Eclipse compiles the new code automatically, then attempts to insert the new code into the already-running server. Changes to classes, JSPs, static files and appengine-web.xml are reflected immediately in the running server without needing to restart.
Can anyone please explain how this works?
For classes like JSP files:
It's debugging using JPDA.
The IDE attaches via a socket to the JVM running your app and hot-redeploys the non-permanent code (i.e. everything outside PermGen).
There are different techniques and frameworks for that:
http://en.wikipedia.org/wiki/Java_Platform_Debugger_Architecture
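For what it's worth, the JPDA/JDWP attach the answer refers to is typically enabled by starting the target JVM with the JDWP agent, for example (port 8000 and myapp.jar are arbitrary placeholders):

java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000 -jar myapp.jar

The IDE can then attach a remote debugger to that port and perform hot code replacement on changed classes.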
It doesn't happen automatically. Check the Project --> Build Automatically option; it should be checked.
If you uncheck it, the project will not be built/deployed automatically.
I was evaluating MGWT for the new mobile version of our website. So I downloaded the MGWT's showcase project and set it up in my Eclipse. I was able to compile the project and run it. I was then trying to set up the showcase to run in the Super Dev Mode environment which would help improve the development speed a lot. I followed the steps in Daniel's blog: http://blog.daniel-kurka.de/2012/07/mgwt-super-dev-mode.html.
Everything was fine. I was able to start the Codeserver. I was able to see the Super Dev Mode popup when I opened up the app. I was able to request the Codeserver to recompile and I could see the compilation messages in the console. I could also see the generated JS files of the recompilation.
However, it seemed that the Codeserver did not pick up the changes I made. I tried to change a simple text, then asked the Codeserver to recompile, but the changes did not show after the recompilation. When I checked the new generated JS files, I could see that the Codeserver still used the old code to recompile.
When I restarted the Codeserver, the changes were recompiled correctly and I could see them in the app.
If anyone has a clue of what I might have done wrong, please let me know. I appreciate your help very much.
Thanks
Just happened to find a solution to my own question:
Instead of adding the source folder to the classpath of the Codeserver run config as in Daniel's instructions, I added this source folder as part of the command line arguments using the -src argument (see here for more info).
So the arguments string for the Codeserver launch config should look like:
-bindAddress <codeserver-ip-address> -src <gwt-source-path> <gwt-module-name>
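For reference, a hypothetical full launch command along those lines (the classpath entries, source path and module name are placeholders; com.google.gwt.dev.codeserver.CodeServer is the CodeServer main class shipped with the GWT SDK):

java -cp gwt-codeserver.jar:gwt-dev.jar:gwt-user.jar com.google.gwt.dev.codeserver.CodeServer -bindAddress 127.0.0.1 -src src/main/java com.example.MyModule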
When I try to evaluate Scala worksheet in IntelliJ IDEA 12.0 with latest Scala plugin (December 5 2012 version) under Standard User in Windows 7 (32) it says:
Cannot start process, the working directory
C:\Program Files\JetBrains\IntelliJ IDEA 12.0\bin does not exist
The directory really exists.
Evaluating Scala worksheet on the same machine under Administrator account works as expected.
What am I doing wrong (besides using Windows)? What can I do to fix the problem (besides running it from the Admin account)?
Thank you!
It seems that IntelliJ IDEA uses the installation directory as the default working directory for Scala worksheets. You can change this setting in the run configuration settings: go to Run > Edit configurations... Once there, expand Defaults and select Worksheet. On the right pane you will see an empty text field named Working directory:
Either enter an existing directory or select one using the button on the right. The directory you specify here will be used for all the worksheets you create. You may also specify a per run configuration working directory if you enter the directory in the corresponding worksheet run configuration instead of in the Defaults section.
I have had the same issue with the Scala plugin, but the problem is resolved if you use Edit Configuration to set the working directory to the current working directory (basically anything other than empty).
This also happens if you do not use external build mode.
Obviously, change the permissions on the directory. Do not run any non-daemon task under Administrator. If similar troubles keep recurring, consider switching from a deprecated OS to something more stable, to ease your development efforts.
Please follow the issue http://youtrack.jetbrains.com/issue/SCL-5049 for information on when it will be fixed.