Setting system property application.secret for Play and initialization order - scala

I'm using NettyServerComponents to embed Play in my application server, and I'm having problems setting the required "application.secret" programmatically.
The call I make is:
System.setProperty("application.secret", secret)
and I can verify that it is set via System.getProperty("application.secret"). However, initialization fails if I put the call inside the class that encapsulates and starts the server, with this:
Exception in thread "main" #6m0lkl2h5: Configuration error
at play.api.libs.CryptoConfigParser.get$lzycompute(Crypto.scala:235)
at play.api.libs.CryptoConfigParser.get(Crypto.scala:204)
at play.api.BuiltInComponents$class.cryptoConfig(Application.scala:275)
...
If I move the same setProperty call earlier in the code, it works just fine.
Is there some import for Play that causes system properties to be read and cached? Or some other reason why it's not seeing the value I can see via getProperty?

I've solved the initialization order problem, and while it is rather specific to my setup, I wanted to post my answer in case someone ends up in a similar situation.
Configuration of Play when using NettyServerComponents happens via this line:
lazy val configuration: Configuration = Configuration(ConfigFactory.load())
ConfigFactory.load() is part of com.typesafe.config and caches the configuration on first access. The daemon that I'm embedding Play in also uses this configuration via net.ceedubs.ficus.FicusConfig. So even though the line above is a lazy initialization, my code had already called ConfigFactory.load() for its own configuration, meaning that setting application.secret via setProperty afterwards had no effect.
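A minimal sketch of the ordering fix, assuming plain Typesafe Config (ConfigFactory.invalidateCaches() is the library's cache-clearing call; the surrounding lines are mine):

// set the property BEFORE anything calls ConfigFactory.load() ...
System.setProperty("application.secret", secret)
// ... or, if load() may already have run elsewhere, drop the cached
// default config so the new property is picked up on the next load()
com.typesafe.config.ConfigFactory.invalidateCaches()
lazy val configuration: Configuration = Configuration(ConfigFactory.load())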
For bonus fun, since both sets of code use ConfigFactory.load(), I just had to put
application {
  secret = "foo"
}
in my existing config file and Play picked it up from there.

Related

Running Karma in a loop and programmatic access

This is more a question about the architecture of a program that runs Karma in a CI pipeline.
I have a set of web components. They use Karma to run tests (following open-wc.org recommendations). Then I have my custom CI pipeline that allows scheduling a test of a selected group of components.
When the test is scheduled, it executes tests for each component one by one. However, in my logs I am getting messages like
MaxListenersExceededWarning: Possible EventEmitter memory leak
detected. 12 exit listeners added to [process]. Use
emitter.setMaxListeners() to increase limit
or sometimes
listen EADDRINUSE: address already in use 127.0.0.1:9877
which breaks the test (exits the process).
I can't really pinpoint the problem, so I am guessing that I am not running the tests in the correct way.
On the server I am using the Server class to initialize the server, then I am calling start on it. When the callback function passed to the Server constructor is called, I assume the server has stopped and I can start over with another component. But clearly that is not the case, per the errors I am getting.
So the question is: what would be the right way of running Karma tests in a loop, one by one, using the node API instead of the CLI?
Update
To be specific about how I am running the tests, I am (see the sketch after this list):
1. Creating the configuration by calling config.parseConfig, where the argument is the component's Karma config file
2. Calling new Server(opts, (code) => {}) where opts is the configuration generated in step 1
3. Adding listeners for browser_complete and browser_error to generate a report and store it in the data store
4. Cleaning up (removing the reference to the server) when the constructor callback is called
5. Getting the next component from the queue and going back to #1
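A minimal sketch of those steps, assuming Karma 6.x's node API (runComponent and the result-storing stubs are my own names):

const karma = require('karma');

async function runComponent(configFile) {
  // step 1: parse the component's Karma config
  const cfg = await karma.config.parseConfig(
    configFile, null, { promiseConfig: true, throwErrors: true });
  return new Promise((resolve) => {
    // step 2: the constructor callback fires when the run ends (step 4)
    const server = new karma.Server(cfg, (exitCode) => resolve(exitCode));
    // step 3: collect per-browser results for the report
    server.on('browser_complete', (browser, result) => { /* store report */ });
    server.on('browser_error', (browser, error) => { /* store error */ });
    server.start();
  });
}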
To answer my own question:
I have moved the whole logic of executing a single test to a child process, and after the test finishes, but before the next test is run, I make sure the child process is killed. No more error messages are showing up.
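A minimal sketch of that child-process approach (the file name run-karma.js and the wiring are my own; the child would contain the parseConfig/Server logic from the sketch above):

const { fork } = require('child_process');

function runInChild(configFile) {
  return new Promise((resolve) => {
    const child = fork('./run-karma.js', [configFile]);
    // the child exits when its Karma run finishes, taking its event
    // listeners and open ports (e.g. 9877) with it
    child.on('exit', (code) => resolve(code));
  });
}

async function runAll(queue) {
  for (const configFile of queue) {
    await runInChild(configFile); // strictly one run at a time
  }
}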

Not calling Cluster.close() with the Cassandra Java driver causes application to be "zombie" and not exit

When my connection is open, the application won't exit. This causes some nasty problems for me (highly concurrent and nested, using a shared session; I don't know when each part is finished) - is there a way to make sure that the cluster doesn't "hang" the application?
For example here:
object ZombieTest extends App {
  val session = Cluster.builder().addContactPoint("localhost").build().connect()
  // app doesn't exit unless doing:
  session.getCluster.close() // won't exit unless this is called
}
In a slightly biased answer, you could look at https://github.com/outworkers/phantom instead of using the standard Java driver.
You get scala.concurrent.Future, monix.eval.Task or even com.twitter.util.Future from a query automatically. You can choose between all three.
DB connection pools are better isolated inside ContactPoint and Database abstraction layers, which have shutdown methods you can easily wire in to your app lifecycle.
It's far faster than the Java driver, as the serialization and deserialization of types is wired in at compile time via more advanced macro mechanisms.
The short answer is that you need a lifecycle hook that calls session.close or session.closeAsync when you shut down everything else; that is how the driver is designed to work.
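A minimal sketch of such a lifecycle, assuming the 3.x Java driver (the try/finally placement is my own choice, not the only option):

object GracefulTest extends App {
  val cluster = Cluster.builder().addContactPoint("localhost").build()
  val session = cluster.connect()
  try {
    // ... run queries on the shared session ...
  } finally {
    // closing releases the driver's non-daemon threads, so the JVM can exit
    session.close()
    cluster.close()
  }
}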

Mono.Cecil and Unity not playing nice together

I'm trying to use Mono.Cecil to patch my custom user scripts in Unity.
I have code I want to inject into my custom user scripts in Unity to avoid writing the same lines of code in every MonoBehaviour in the project.
However, when I do:
using (AssemblyDefinition assemblyDefinition = AssemblyDefinition.ReadAssembly(assembly.Location, new ReaderParameters() { ReadWrite = true }))
{
    // Do some patching here
    assemblyDefinition.Write();
}
Then I get an exception saying
IOException: Win32 IO returned 1224
which apparently means that the file is locked from being written to.
If I instead try to use:
File.Delete(sourceAssemblyPath);
File.Move(targetAssemblyPath, sourceAssemblyPath);
Then the DLL gets patched correctly, but when I try to play the application, the scripts in my scene lose their references, as if replacing the file makes Unity think the scripts on the scene objects no longer exist in the project (which I guess would make sense, since I DID delete the DLL they were in to replace it with the new one).
Has anyone any idea on how to patch the user's project assembly in Unity while maintaining usability of the current project?
Or should I resort to only patching during build or something?
Suggestions?
Thanks
Last time I tried something with Cecil I was able to use a single var stream = new FileStream(path, FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite); to both read and write the file without a delete/copy.
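A minimal sketch of that single-stream approach (the patching step is elided; the SetLength call is my own guard against the new image being shorter than the old one):

using (var stream = new FileStream(path, FileMode.Open, FileAccess.ReadWrite, FileShare.ReadWrite))
using (var assemblyDefinition = AssemblyDefinition.ReadAssembly(stream))
{
    // ... patch types/methods here ...
    stream.Position = 0;
    assemblyDefinition.Write(stream);
    stream.SetLength(stream.Position); // truncate any leftover bytes
}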
If you do it using [InitializeOnLoad], then the assemblies have obviously already been loaded, so modifying them at that point isn't going to help; you'd need to load the assemblies once to trigger Cecil, then reload again to pick up the modifications, every time you would normally reload only once. You'll want to use UnityEditor.AssemblyReloadEvents.beforeAssemblyReload instead.
beforeAssemblyReload gets called after the new assemblies are recompiled but before they are loaded. So you'd use [InitializeOnLoad] to register a callback ([DidReloadScripts] seems identical in every case I've tried), which should ensure that all newly compiled assemblies get processed going forward. In some cases this might not happen (for example, if scripts need to be compiled when you first open the editor, so your event isn't registered yet), so you'll probably also need to run your processing code immediately on initialisation as well, and force an assembly reload if anything changed, using UnityEditorInternal.InternalEditorUtility.RequestScriptReload(); or UnityEditor.AssetDatabase.Refresh();.
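A minimal sketch of that registration (the class and method names are my own):

using UnityEditor;

[InitializeOnLoad]
static class AssemblyPostProcessor
{
    static AssemblyPostProcessor()
    {
        // re-registered on every domain load, so it catches later recompiles
        AssemblyReloadEvents.beforeAssemblyReload += ProcessAllAssemblies;
    }

    static void ProcessAllAssemblies()
    {
        // run Cecil over the freshly compiled (not yet loaded) assemblies here
    }
}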
The best way I've found to mark an assembly as processed is to inject an attribute definition and add an instance of it to the assembly, then check for it by name and skip the assembly if it exists. Without a way to do this, you'd be processing every assembly in the project every time scripts recompile, rather than only the ones that have been modified. A sketch of such a marker follows.
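A minimal sketch of that marker, assuming Cecil's standard API (the attribute name and class layout are my own):

using System;
using System.Linq;
using Mono.Cecil;
using Mono.Cecil.Cil;
using SR = System.Reflection;

static class ProcessedMarker
{
    const string Name = "__PatchedMarkerAttribute"; // hypothetical marker name

    public static bool IsProcessed(AssemblyDefinition asm) =>
        asm.CustomAttributes.Any(a => a.AttributeType.FullName == Name);

    public static void Mark(AssemblyDefinition asm)
    {
        ModuleDefinition module = asm.MainModule;

        // define a trivial, non-public attribute type inside the patched assembly
        var attrType = new TypeDefinition("", Name,
            TypeAttributes.Class | TypeAttributes.NotPublic,
            module.ImportReference(typeof(Attribute)));

        // a public parameterless ctor that just calls the base Attribute ctor
        var baseCtor = module.ImportReference(typeof(Attribute)
            .GetConstructor(SR.BindingFlags.Instance | SR.BindingFlags.NonPublic,
                            null, Type.EmptyTypes, null));
        var ctor = new MethodDefinition(".ctor",
            MethodAttributes.Public | MethodAttributes.HideBySig |
            MethodAttributes.SpecialName | MethodAttributes.RTSpecialName,
            module.TypeSystem.Void);
        ILProcessor il = ctor.Body.GetILProcessor();
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Call, baseCtor);
        il.Emit(OpCodes.Ret);

        attrType.Methods.Add(ctor);
        module.Types.Add(attrType);

        // stamp the assembly itself with an instance of the marker
        asm.CustomAttributes.Add(new CustomAttribute(ctor));
    }
}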
Edit: to process the assemblies of a build, try this:
private static bool _HasProcessed;

[PostProcessScene]
private static void OnPostProcessScene()
{
    if (_HasProcessed || !BuildPipeline.isBuildingPlayer)
        return;

    ProcessAssemblies(@"Library\PlayerDataCache");
    _HasProcessed = true;
}

[PostProcessBuild]
private static void OnPostProcessBuild(BuildTarget target, string pathToBuiltProject)
{
    _HasProcessed = false;
}

Connection not available in Play for Scala

I have the following configuration in application.conf for an in-memory database HSQLDB:
db {
  inmemory {
    jndiName = jndiInMemory
    driver = org.hsqldb.jdbc.JDBCDriver
    url = "jdbc:hsqldb:mem:inmemory"
  }
}
And I connect in a controller with the following statements:
val database = Database.forName("jndiInMemory")
val session = database.createSession
val conn = session.conn
// JDBC statements
The problem is that when this code runs several times, I get an exception in session.conn:
HikariPool-34 - Connection is not available, request timed out after
30000ms.
Since I'm using JNDI, I figured that the connections are reused. Do I have to drop the session after I finish using it? How do I fix this code?
Hard to tell without looking at the actual code, but in general: when you create a database connection at the start of the application, you usually reuse it until the application ends, and then you should close the connection.
If you spawn a new connection every time you run a query, without closing the previous ones, you will hit the connection limit pretty fast.
An easy pattern is: create a session at the beginning and then use dependency injection to pass it to wherever you need to run it.
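A minimal sketch of the complementary fix on the question's side, assuming Slick's JdbcBackend (withSession is my own helper name): every session you open is closed again, which returns its connection to the Hikari pool.

import slick.jdbc.JdbcBackend.{Database, Session}

// hypothetical loan-pattern helper: open, use, and always close a session
def withSession[A](database: Database)(f: Session => A): A = {
  val session = database.createSession()
  try f(session)
  finally session.close() // releases the pooled connection
}

val result = withSession(Database.forName("jndiInMemory")) { session =>
  val conn = session.conn
  // JDBC statements
}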
BTW, I noticed that some configurations, e.g. Slick, create connections statically (as in: store them as static class properties). So you need a handler that closes the session when the application exits. That runs OK... until you start the application several times over in sbt, which by default uses the same JVM to run itself and the spawned application. In such cases it is better to run things forked. For tests I use Test / fork := true; for run I use sbt-revolver, though I am not sure how that would play out with Play.

Delphi: App initialization - best practices / approach

I run into this regularly, and am just looking for the best practice/approach. I have a database / datamodule-containing app, and want to fire up the database/datasets on startup without having "Active at runtime" set to true at design time (the database location varies). I also run a web "check for updates" routine when the app starts up.
Given TForm event sequences, and results from various trial and error, I'm currently using this approach:
I use a "Globals" record set up in the main form to store all global vars, have one element of that called Globals.AppInitialized (boolean), and set it to False in the Initialization section of the main form.
At the main form's OnShow event (all forms are created by then), I test Globals.AppInitialized; if it's false, I run my "Initialization" stuff, and then finish by setting Globals.AppInitialized := True.
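A minimal sketch of that pattern (InitializeDatabase and CheckForUpdates are hypothetical stand-ins for the actual startup work):

type
  TGlobals = record
    AppInitialized: Boolean;
    // ... other app-wide state ...
  end;

var
  Globals: TGlobals;

procedure TMainForm.FormShow(Sender: TObject);
begin
  if not Globals.AppInitialized then
  begin
    InitializeDatabase; // hypothetical: open the DB/datasets
    CheckForUpdates;    // hypothetical: web "check for updates" routine
    Globals.AppInitialized := True;
  end;
end;

initialization
  Globals.AppInitialized := False;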
This seems to work pretty well, but is it the best approach? Looking for insight from others' experience, ideas and opinions. TIA..
I generally always turn off auto creation of all forms EXCEPT for the main form and possibly the primary datamodule.
One trick I learned: you can add your datamodule to your project, allow it to auto-create, and have it created BEFORE your main form. Then, when your main form is created, the OnCreate for the datamodule will already have run.
If your application has some code to, say, set the focus of a control (something you can't do during creation, since it's "not visible yet"), then create a user message and post it to the form in your OnCreate. The message SHOULD (no guarantee) be processed as soon as the form's message loop runs. For example:
const
  wm_AppStarted = WM_USER + 101;

type
  TForm1 = class(TForm)
    ...
    procedure wmAppStarted(var Msg: TMessage); message wm_AppStarted;
  end;

// In your OnCreate event add the following, which should result in
// your wmAppStarted handler firing:
PostMessage(Handle, wm_AppStarted, 0, 0);
I can't think of a single time that this message wasn't processed, but by its nature the call just adds to the message queue, and if the queue is full the message is "dropped". Just be aware that that edge case exists.
You may want to directly interfere with the project source (.dpr file) after the form creation calls and before Application.Run. (Or even earlier, if needed.)
This is how I usually handle such initialization stuff:
...
Application.CreateForm(TMainForm, MainForm);
...
MainForm.ApplicationLoaded; // loads options, etc.
Application.Run;
...
I don't know if this is helpful, but some of my applications don't have any form auto created, i.e. they have no mainform in the IDE.
The first form created with the Application object as its owner automatically becomes the main form. Thus I auto-create only one datamodule as a loader and let it decide which datamodules to create when, and which forms to create in what order. This datamodule has StartUp and ShutDown methods, which are called as "brackets" around Application.Run in the .dpr. The ShutDown method gives a little more control over the shutdown process.
This can be useful when you have designed different "mainforms" for different use cases of your application or you can use some configuration files to select different mainforms.
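A minimal sketch of those "brackets" in the .dpr (TLoaderDM and the method names follow the description above; the exact names are mine):

begin
  Application.Initialize;
  Application.CreateForm(TLoaderDM, LoaderDM); // the loader datamodule
  LoaderDM.StartUp;  // decides which datamodules/forms to create, in what order
  Application.Run;
  LoaderDM.ShutDown; // controlled shutdown after the message loop ends
end.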
There actually isn't such a concept as a "global variable" in Delphi. All variables are scoped to the unit they are in and other units that use that unit.
Just make the AppInitialized and Initialization stuff as part of your data module. Basically have one class (or datamodule) to rule all your non-UI stuff (kind of like the One-Ring, except not all evil and such.)
Alternatively you can:
Call it from your splash screen.
Do it during login
Run the "check for update" in a background thread - don't force them to update right now. Do it kind of like Firefox does.
I'm not sure I understand why you need the global variables? Nowadays I write ALL my Delphi apps without a single global variable. Even when I did use them, I never had more than a couple per application.
So maybe you need to first think why you actually need them.
I use a primary datamodule to check whether the DB connection is OK; if it isn't, I show a custom form to set up the DB connection, and then load the main form:
Application.CreateForm(TDmMain, DmMain);
if DmMain.isDBConnected then
begin
  Application.CreateForm(TDmVisualUtils, DmVisualUtils);
  Application.CreateForm(TfrmMain, frmMain);
end;
Application.Run;
One trick I use is to place a TTimer on the main form, set the interval to something like 300 ms, and perform any initialization there (DB login, network file copies, etc.). Starting the application brings up the main form immediately and lets the initialization 'stuff' happen afterwards. Users don't start multiple instances thinking "Oh.. I didn't dbl-click... I'll do it again.."
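A minimal sketch of that TTimer trick (InitTimer, ConnectDatabase and CheckForUpdates are hypothetical names):

procedure TMainForm.InitTimerTimer(Sender: TObject);
begin
  InitTimer.Enabled := False; // make sure this fires only once
  ConnectDatabase;            // hypothetical: open the DB/datasets
  CheckForUpdates;            // hypothetical: web "check for updates" routine
end;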