How can I call two install4j launchers in sequence from one user app launch? - install4j

I have a tiny remote incremental update program (https://github.com/HitTheSticks/alamode), written in java, that needs to be launched before every execution of my main software (which is also written in java). It would be a bad idea to run the updater in the same process space as the main software, given that it's going to be stomping on all sorts of files that are pulled into the classpath by the main program.
Since the updater has basically no dependencies, I can simply generate a launcher with a safe context for it and run it before the main program.
But Java's exec() facilities don't follow fork()/exec() semantics, which means that if the updater exec()s the launcher, both JVMs end up running simultaneously, with all logging I/O filtering through the updater. Yuck.
How can I make sure that when the user clicks [my start menu item], it always seamlessly runs [my updater]->[my program]?
I see 'com.install4j.api.launcher.ApplicationLauncher', but it doesn't specify whether or not the calling application can exit without also taking down the launched application. And preferably, I wouldn't nest the applications at all... I'd like them launched sequentially.
Oh, and it'd be cool if the solution were at least mostly portable.
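Roughly, what I'm after is something like the thin wrapper sketched below: run the updater in its own JVM, wait for it to exit, then start the main program. This is only an illustration and uses no install4j-specific API; the class name, jar names, and paths are placeholders I made up.

    import java.io.IOException;

    /**
     * Hypothetical wrapper entry point: run the updater to completion,
     * then start the main application in a fresh JVM. Jar names and
     * paths are placeholders for illustration only.
     */
    public class ChainLauncher {

        public static void main(String[] args) throws IOException, InterruptedException {
            // Resolve the java binary of the current runtime; works on Windows,
            // macOS and Linux (Windows resolves "java" to java.exe).
            String javaBin = System.getProperty("java.home") + "/bin/java";

            // 1. Run the updater in its own JVM and block until it finishes.
            //    inheritIO() keeps its console output visible without piping
            //    it through this process's logging.
            Process updater = new ProcessBuilder(javaBin, "-jar", "updater.jar")
                    .inheritIO()
                    .start();
            int updaterExit = updater.waitFor();
            if (updaterExit != 0) {
                System.err.println("Updater failed with exit code " + updaterExit);
                System.exit(updaterExit);
            }

            // 2. Only now start the main application. The wrapper can exit as
            //    soon as the child is launched; the child JVM is not killed
            //    when its parent exits.
            new ProcessBuilder(javaBin, "-jar", "app.jar")
                    .inheritIO()
                    .start();
        }
    }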

Related

macOS : programmatic check if process runs as a launchDaemon or launchAgent or from command-line

I'd like to get an indication of the context in which my process is running. I'd like to distinguish between the following cases:
It runs as a persistent scheduled task (launchDaemon/launchAgent)
It was launched on demand by launchd, via the open command line or a double-click.
It was called directly from a command-line terminal (i.e. > /bin/myProg from a terminal).
Is there any indication of the process context available through an Objective-C/Swift framework or any other way? I wish to avoid reinventing the wheel here :-)
thanks
There is definitely no simple public API or framework for doing this, and doing it is hard.
Some parts of this information could possibly be retrieved by your process itself through workarounds that only work on some system versions:
There is a C-based launchctl API, which you can try to use to enumerate all launch daemon/agent jobs and search for your app's path/PID. Your process may require root rights to do this.
Being launched via the open command line can sometimes be traced through environment variables that open sets for your process.
Running directly from the command line can leave responsible_pid filled in correctly (this comes from a private libquarantine API, unless you observe it with Endpoint Security, available starting from some macOS 11.x version).
All of these things, except the launchctl API, are not public, not reliable, could be broken at any time by Apple, and may not be sufficient for your needs.
But they are worth a try, because there is nothing better :)
You could potentially distinguish all the cases you want by monitoring system events from some other (root-permitted) process you control, possibly adopting the Endpoint Security framework (which requires an entitlement from Apple and can't be distributed via the App Store), calling a lot of private APIs, and doing a bunch of reversing tricks.
The open resource I could suggest on this topic is here
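If you want to experiment with the environment-variable/terminal heuristics above, a rough sketch could look like the following (shown in Java purely for illustration; the same checks can be done from Objective-C/Swift). None of this is a supported API, and the variable names checked (XPC_SERVICE_NAME, TERM) are assumptions about what launchd and terminal emulators typically set; they can differ between macOS versions.

    /**
     * Rough, unreliable heuristics only -- not a supported API.
     * XPC_SERVICE_NAME and TERM are assumed names that may vary by OS version.
     */
    public class LaunchContextGuess {

        public static void main(String[] args) {
            String xpcService = System.getenv("XPC_SERVICE_NAME");
            boolean hasTty = System.console() != null;   // attached to an interactive terminal?

            if (xpcService != null && !xpcService.isEmpty() && !"0".equals(xpcService)) {
                // Assumption: launchd-managed jobs tend to get their label here.
                System.out.println("Probably running as a launch daemon/agent: " + xpcService);
            } else if (hasTty || System.getenv("TERM") != null) {
                System.out.println("Probably started directly from a terminal");
            } else {
                System.out.println("Probably started via open / double-click (launchd on demand)");
            }
        }
    }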

Can I open and run from multiple command line prompts in the same directory?

I want to open two command line prompts (I am using CMDer) from the same directory and run different commands at the same time.
Would those two commands interrupt each other?
One is for compiling a web application I am building (takes like 7 minutes to compile), and the other is to see the history of the commands I ran (this one should be done quickly).
Thank you!
Assuming that CMDer does nothing other than issue the same commands to the operating system as a standard cmd.exe console would, the answer is a clear "Yes, they do interfere, but it depends" :D
Breakdown:
The first part "opening multiple consoles" is certainly possible. You can open up N console windows and in each of them switch to the same directory without any problems (Except maybe RAM restrictions).
The second part, "run commands which do or do not interfere", is the tricky part. If your idea is that a console window presents something like an isolated environment where you can do things as you like and, if you close the window, everything is back to normal as if you had never touched anything (think of a virtual machine snapshot which is lost/reverted when closing the VM), then the answer is: this is not the case. There will be observable cross-console effects.
Think about deleting a file in one console window and then opening this file in a second console window: it would not be very intuitive if the file had not vanished in the second console window as well.
However, sometimes there are delays until changes to the file system are visible to another console window. It could be that you delete the file in one console, run dir on the file's directory in another console, and still see that file in the listing. But if you try to access it, the operating system will certainly quit with an error message of the kind "File not found".
Generally you should consider a console window to be a "View" on your system. If you do something in one window, the effect will be present in the other, because you changed the underlying system which exists only once (the system is the "Model" - as in "Model-View-Controller Design Pattern" you may have heard of).
An exception to this might be changes to the environment variables. These are copied from the current state when a console window is started. And if you change the value of such a variable, the other console windows will stay unaffected.
So, in your scenario, if you let a build/compile operation run, and during this process some files on your file system are created, read (locked), altered or deleted, then there is a possible conflict if the other console window tries to access the same files. This is a so-called "race condition", that is, a non-deterministic situation: it is unpredictable which state of a file will be current for the second console window (or for both, if the second one also changes files which the first one wants to work with).
If there is no interference at the file level (reading the same files is fine, writing to the same file is not), then there should be no problem letting both tasks run at the same time.
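To make the file-level interference concrete, here is a small, generic Java sketch (nothing CMDer-specific; the file name is a placeholder) in which a process takes an exclusive OS-level lock before writing, so a second instance started from another console backs off instead of clobbering the same file:

    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    /**
     * Run this from two console windows in the same directory:
     * the second run detects the lock held by the first and gives up.
     */
    public class SafeWriter {

        public static void main(String[] args) throws IOException {
            Path target = Path.of("build-output.txt");   // placeholder file name

            try (FileChannel channel = FileChannel.open(target,
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {

                FileLock lock = channel.tryLock();        // non-blocking attempt
                if (lock == null) {
                    System.err.println("Another process is writing to " + target + ", giving up.");
                    return;
                }
                try {
                    channel.write(StandardCharsets.UTF_8.encode(
                            "written by PID " + ProcessHandle.current().pid() + "\n"));
                } finally {
                    lock.release();
                }
            }
        }
    }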
However, on a very detailed view, both processes do interfere in that they compete for the same CPU and RAM resources of your system, which are limited but usually plentiful. This should not pose any problems with today's PC computing power, considering multiple separate cores, 16GB of RAM, terabytes of hard drive storage or fast SSDs, and so on.
Unless there is a very demanding, highly parallelizable, high-priority task to be considered, which eats up 98% of the CPU time, for example; then there might be a considerable slowdown for other processes.
Normally, the operating system's scheduler does a good job of giving each user process enough CPU time to finish as quickly as possible, while still presenting a responsive mouse cursor, playing some music in the background, letting Chrome run with more than 2 tabs ;) and uploading the newest telemetry data to some servers on the internet, all at the same time.
There are techniques which make it possible for a file to be available as a snapshot for a given timestamp. The keyword under Windows is "Shadow Copy". Without going into details, this technique allows, for example, defragmenting a file while it is being edited in some application, or a backup copying a (large) file while a delete operation runs on the same file. The operating system takes the access time into account when a process requests access to a file, so it could let the backup finish first and only then schedule the delete operation to run, since the delete was started after the backup (in this example), or it could do even more sophisticated things to present a synchronized file system state, even if it is actually changing at that moment.

Is GWTTestCase obsolete? Are there better alternatives?

Trying to figure out the status of the GWTTestCase suite/methodology.
I've read some things which say that GWTTestCase is kind of obsolete. If this is true, then what would be the preferred methodology for client-side testing?
Also, although I haven't tried it myself, someone here says that he tried it, and it takes seconds or tens of seconds to run a single test; is this true? (i.e. is it common to take tens of seconds to run a test with GWTTestCase, or is it more likely a config error on our side, etc)
Do you use any other methodology for GWT client-side testing that has worked well for you?
The problem is that any GWT code has to be compiled to run within a browser. If your code is just Java, you can run it in a typical JUnit or TestNG test, and it will run as instantly as you expect.
But consider that a JUnit test must be compiled to .class, and run in the JVM from the test runner main() - though you don't normally invoke this directly, just start it from your build tool or IDE. In the same way, your GWT/Java code must be compiled into JavaScript, and then run in a browser of some kind.
That compilation is what takes time - for a minimal test, running in only one browser (i.e. one permutation), this is going to take a minimum of 10 seconds on most machines (that includes building the host page for the GWTTestCase, which lets the JVM tell the browser which test to run and get results, stack traces, or timeouts back). Then add in how long the tested component of your project takes to compile, and you should have a good idea of how long that test case will take.
There are a few measures you can take to minimize the time taken, though 10 seconds is pretty much the bare minimum if you need to run in the browser.
Use test suites - these tell the compiler to go ahead and make a single larger module in which to run all of the tests. Downside: if you do anything clever with your modules, joining them into one might have other ramifications.
Use JVM tests - if you are just testing a presenter, and the presenter is pure Java (with a mock view), then don't mess with running the code in the browser just to test its logic (a sketch follows below). If you are concerned about differences, consider whether the purpose of the test is to make sure the compiler works, or to exercise the logic.
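For example, a presenter test can be a plain JUnit test with a hand-rolled mock view and no GWTTestCase at all. The LoginPresenter/LoginView types below are hypothetical, just to illustrate the shape of such a test:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical MVP types for illustration -- not part of GWT itself.
    interface LoginView {
        void showError(String message);
    }

    class LoginPresenter {
        private final LoginView view;
        LoginPresenter(LoginView view) { this.view = view; }

        void onLoginClicked(String user, String password) {
            if (user == null || user.isEmpty()) {
                view.showError("User name is required");
            }
        }
    }

    /** Plain JUnit test: no browser, no GWT compile, runs in milliseconds. */
    public class LoginPresenterTest {

        @Test
        public void emptyUserNameShowsAnError() {
            final String[] lastError = new String[1];
            LoginView mockView = message -> lastError[0] = message;   // hand-rolled mock

            new LoginPresenter(mockView).onLoginClicked("", "secret");

            assertEquals("User name is required", lastError[0]);
        }
    }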

Does ProfileOptimization actually work?

One of the new performance enhancements in .NET 4.5 is the introduction of 'Multicore JIT'.
See here for more details.
I have tried this, but it seems to have no effect on my application.
The reason why I am interested is that my app (IronScheme) takes a good long time to start up if not NGEN'd, which implies a fair amount of JIT'ing is involved at startup (1.4 sec vs 0.1 sec when NGEN'd).
I have followed the instructions on how to enable this, and I can see that a 'small' (4-12KB) profile file is created. But on subsequent startups it seems to have absolutely no effect on the startup time. It is still 1.4 sec.
Has anyone actually seen (or made) this work in practice?
Also, are there any limitations on which code will be 'tracked'? Eg: assembly loading contexts, transient assemblies, etc. I ask this as the created file never seems to grow, but I am in fact generating a fair amount of code (in a transient assembly).
One bug that I did encounter was that SetProfileRoot does not seem to understand / as a path separator; make sure to use \.
The rule of thumb we use at Microsoft is that Multicore JIT gets you about half way towards NGEN startup performance. Thus if your app starts in 0.1 seconds with NGEN and 1.4 seconds without NGEN, we would expect Multicore JIT startup to take about 0.75 seconds.
That being said, we had to put some limitations in place to guarantee that program execution order is the same with and without MCJ. MCJ will sometimes pause the background thread waiting for modules to be loaded by the foreground thread, and will abort background compilation if there is an assembly resolve or module resolve event.
If you want to find out what's happening in your case, we have ETW (Event Tracing for Windows) instrumentation of the MCJ feature, and we will be releasing a version of PerfView soon which will be able to collect these events if you take a trace of your app startup.
Update: PerfView has been updated to be able to show background JIT information. Here are the steps to diagnosing with the latest version (1.2.2.0):
Collect a trace using PerfView of your application startup, either using Collect->Run or Collect->Collect from the main PerfView menu.
Assuming you used Collect->Run, put the name of your .exe in the Command text box, pick a filename (e.g. IronScheme.etl), select Background JIT from Advanced Options, and click Run Command.
Close your application and double click on the IronScheme.etl file that gets generated.
Double click on the JIT Stats view in the list underneath IronScheme.etl, you should see something like this in the view that pops up:
This process uses Background JIT compilation (System.Runtime.ProfileOptimize)
Methods Background JITTed : 2,951
Percent # Methods Background JITTed : 52.9%
MSec Background JITTing : 3,901
Percent Time JITTing is Background : 50.9%
Background JIT Thread : 11308
You can click on "View Raw Background Jit Diagnostics" to see all of the MCJ events in Excel. One question I forgot to ask: are you running this on a multicore machine or a multicore VM? It is a common mistake to test out MCJ in a VM that has only a single logical processor.
Calling Activator.CreateInstance during startup seems to kill MCJ?
Or rather, that triggered an assembly resolve event, which seems to stop MCJ completely, and it never works after that. Maybe the MSDN docs should mention this.

A custom RMI Activator process

I am trying to implement a custom RMI activation scheme, in which remote Activatable objects will be hosted in a custom EXE process, instead of the standard Java.exe/Javaw.exe.
In RMI, 'Activatable' objects can be persisted and restored or launched on demand. After an 'Activatable' object is registered with the RMI registry and requested for the first time, RMID launches a host child process (typically java.exe/javaw.exe), passes two pieces of information through the stdin of the child process, and asks the child process to run the main method of a special hidden class, 'sun.rmi.server.ActivationGroupInit'. This class bootstraps everything else and prepares the process to create and host instances of the 'Activatable' object. Thereafter the client and server communicate over RMI.
I've gotten as far as defining a simple Win32 EXE project, writing some JNI code to launch the JVM inside this EXE, and managing to invoke the main method of 'sun.rmi.server.ActivationGroupInit'. This class is able to parse stdin and extract whatever it needs to create the ActivationGroup.
However, I am running into some issues that are ultimately causing the activation of the remote object to fail (with an UnknownObjectException), and I am in the process of troubleshooting them.
At this point I just wanted to take a step back and ask if anyone has attempted this before, and knows if there are any gotchas that I should know early on?
Thanks,
Ranjit
As we have discussed endlessly on the Oracle forums, you don't need any of this. Just copy java.exe or javaw.exe, or write your own wrapper that simply starts a JVM using all the arguments it is passed, in exactly the same way that java.exe does. You don't need to worry about what the activation system sends you on stdin, etc.; the existing activation classes will do all that for you.
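For completeness, the Java side of the registration is just the standard java.rmi.activation API (long deprecated and removed in recent JDKs); the only custom-EXE hook is ActivationGroupDesc.CommandEnvironment, which tells rmid which executable to spawn instead of java.exe. A sketch along these lines, where the paths, class name, and codebase are placeholders:

    import java.rmi.Naming;
    import java.rmi.Remote;
    import java.rmi.activation.Activatable;
    import java.rmi.activation.ActivationDesc;
    import java.rmi.activation.ActivationGroup;
    import java.rmi.activation.ActivationGroupDesc;
    import java.rmi.activation.ActivationGroupID;
    import java.util.Properties;

    /**
     * Registration sketch: tell rmid to spawn a custom launcher EXE for this
     * activation group. The EXE only has to start a JVM with exactly the
     * arguments it receives, as java.exe would. Assumes rmid is running
     * (and, on older JDKs, a suitable security policy is in place).
     */
    public class RegisterWithCustomExe {

        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("java.security.policy", "C:\\myapp\\group.policy");   // placeholder

            // Point rmid at the custom wrapper EXE instead of java.exe.
            ActivationGroupDesc.CommandEnvironment cmdEnv =
                    new ActivationGroupDesc.CommandEnvironment(
                            "C:\\myapp\\MyActivationHost.exe",               // placeholder path
                            new String[0]);                                  // extra command-line options

            ActivationGroupDesc groupDesc = new ActivationGroupDesc(props, cmdEnv);
            ActivationGroupID groupId = ActivationGroup.getSystem().registerGroup(groupDesc);

            // Register the activatable implementation class as usual.
            ActivationDesc desc = new ActivationDesc(
                    groupId,
                    "com.example.MyActivatableImpl",                         // placeholder class
                    "file:/C:/myapp/classes/",                               // placeholder codebase
                    null);                                                   // no initialization data

            Remote stub = Activatable.register(desc);
            Naming.rebind("MyService", stub);
            System.out.println("Registered MyService for activation via the custom EXE");
        }
    }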