NUnit CPU usage very high - How to resolve?

On numerous unrelated projects, the CPU usage of NUnit has often ended up at about 50% even when I'm not running my tests. From other information I've read, this supposedly has more to do with my code than with NUnit.
Does anyone know how I can isolate the parts of my code that are causing this and fix them?
Thanks

I have the same problem, and it seems to consistently affect only one test project doing integration testing (calling web services, checking stuff over HTTP, etc.). I'm very careful to dispose of networked objects (with using(...){ }), so I don't quite understand why NUnit should continue to use 90% CPU days after the test has finished, by which point all objects used by the test should have been disposed.
The really strange thing is that while running the test, NUnit uses no more than 10%-50% CPU. It's only after the test has completed that CPU usage surges and stays constantly at 80%-100% forever. Reloading or closing the test project (File > Close) doesn't help either; NUnit itself needs to be closed.

Related

program execution is too slow in eclipse and was fast just yesterday for the same program

I am executing a Java program via Eclipse. I was executing the exact same program yesterday and it was only taking 10 minutes; today the same program is taking more than an hour, and I did not change a single thing in my code. Could you please give me a solution to get back to the execution time I had yesterday?
If you did not change anything in your source code, I see the following possible reasons for this:
Side effects on the machine you are running the program on, like other (maybe hidden) processes soaking up CPU time and slowing down your program.
This could also be the machine itself being slower (thermal throttling from too much heat, etc.)
Your code is doing some "random" things that require longer runs sometimes (sounds unlikely though)
Somehow Eclipse is causing an issue (try to run your program without it)
Your Java runtime might cause a problem (sounds unlikely as well, but maybe updating it to the newest version can help)

Troubleshooting "Out of Memory Error: Metaspace" in Play for Scala

While working in development mode in Play for Scala (2.5.x), after around three hours of changing code and hot-deploying, Play hangs with the error java.lang.OutOfMemoryError: Metaspace.
After some research, the problem seems to be that the application instantiates Java objects (such as factories and connections) that Play is not aware of, and when Play restarts these objects stay in memory causing leaks. The solution is to clean up components when Play shuts down as explained here, or to destroy them after they are used.
The problem is that I clean up all these objects and still get OutOfMemoryError. I tried with Java's jconsole to find out what classes are creating the leak and how much memory they are taking, but couldn't find much. Any ideas how to deal with this situation? I don't want to simply increase memory without knowing what's going on.
PS: This seems to be a common issue, it would be great if Play itself provided the tools to detect the problem.
Unfortunately, this problem seems unavoidable at the moment. Although it got better in Play 2.6, I still run into this.
And it has nothing to do with components not getting cleaned up; the metaspace is the place where classes are loaded. Play (dynamically) creates a lot of classes (e.g. anonymous classes) when compiling, and each of those classes adds to the metaspace, which eventually fills up.
My suggestion would be to just increase -XX:MaxMetaspaceSize until you can work for a few hours without this exception, then restart sbt once in a while. I currently use 500 MB, which seems to be fine (the default is usually 128 MB with the sbt launcher):
sbt -J-XX:MaxMetaspaceSize=500m
This is usually no problem in production, since you have a fixed number of classes loaded (no compilation in production).
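If you'd rather not pass the flag on every invocation, the same option can live in an `.sbtopts` file next to the build (a config sketch, assuming the standard sbt launcher script, which reads this file with one option per line):

```shell
# .sbtopts -- picked up by the sbt launcher; -J passes the option to the JVM
-J-XX:MaxMetaspaceSize=500m
```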

Is GWTTestCase obsolete? Are there better alternatives?

Trying to figure out what's the status of GWTTestCase suite/methodology.
I've read some things which say that GWTTestCase is kind of obsolete. If this is true, then what would be the preferred methodology for client-side testing?
Also, although I haven't tried it myself, someone here says that he tried it, and it takes seconds or tens of seconds to run a single test; is this true? (i.e. is it common to take tens of seconds to run a test with GWTTestCase, or is it more likely a config error on our side, etc)
Do you use any other methodology for GWT client-side testing that has worked well for you?
The problem is that any GWT code has to be compiled to run within a browser. If your code is just Java, you can run in a typical JUnit or TestNG test, and it will run as instantly as you expect.
But consider that a JUnit test must be compiled to .class, and run in the JVM from the test runner main() - though you don't normally invoke this directly, just start it from your build tool or IDE. In the same way, your GWT/Java code must be compiled into JavaScript, and then run in a browser of some kind.
That compilation is what takes time. For a minimal test, running in only one browser (i.e. one permutation), this is going to take a minimum of about 10 seconds on most machines: that covers the host page and scaffolding GWTTestCase needs so the JVM can tell the browser which test to run and collect results, stack traces, or timeouts. Then add in how long the tested part of your project takes to compile, and you should have a good idea of how long that test case will take.
There are a few measures you can take to minimize the time taken, though 10 seconds is pretty much the bare minimum if you need to run in the browser.
Use test suites - these tell the compiler to go ahead and make a single larger module in which to run all of the tests. Downside: if you do anything clever with your modules, joining them into one might have other ramifications.
Use JVM tests - if you are just testing a presenter, and the presenter is pure Java (with a mock view), then don't mess with running the code in the browser just to test its logic. If you are concerned about differences, consider whether the purpose of the test is to make sure the compiler works, or to exercise the logic.
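To illustrate the second point, here is a minimal sketch of a pure-Java presenter test with a hand-rolled mock view. The `LoginView`/`LoginPresenter` names are hypothetical, not from any GWT API; the point is that nothing here touches the browser, so it runs in a plain JVM as fast as any ordinary JUnit-style test:

```java
// Hypothetical view interface the presenter drives; in a real GWT app the
// concrete implementation would wrap widgets, but the presenter never knows.
interface LoginView {
    void showError(String message);
    void goToDashboard();
}

// Pure-Java presenter: validation and routing logic only, no GWT imports.
class LoginPresenter {
    private final LoginView view;
    LoginPresenter(LoginView view) { this.view = view; }

    void onLogin(String user, String password) {
        if (user == null || user.isEmpty() || password == null || password.isEmpty()) {
            view.showError("missing credentials");
        } else {
            view.goToDashboard();
        }
    }
}

public class LoginPresenterTest {
    private static void check(boolean cond, String msg) {
        if (!cond) throw new AssertionError(msg);
    }

    public static void main(String[] args) {
        final StringBuilder calls = new StringBuilder();
        // Hand-rolled mock: records which view method the presenter invoked.
        LoginView mockView = new LoginView() {
            public void showError(String m) { calls.append("error:").append(m); }
            public void goToDashboard()     { calls.append("dashboard"); }
        };

        LoginPresenter presenter = new LoginPresenter(mockView);

        presenter.onLogin("", "secret");
        check(calls.toString().equals("error:missing credentials"),
              "empty user should be rejected");

        calls.setLength(0);
        presenter.onLogin("alice", "secret");
        check(calls.toString().equals("dashboard"),
              "valid credentials should route to dashboard");

        System.out.println("ok");
    }
}
```

A mocking library (Mockito, EasyMock) would do the same job with less boilerplate; the anonymous class keeps the sketch dependency-free.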

Why shouldn't babel-node be used in production?

The babel-node docs carry a stern warning:
Not meant for production use
You should not be using babel-node in production. It is unnecessarily heavy, with high memory usage due to the cache being stored in memory. You will also always experience a startup performance penalty as the entire app needs to be compiled on the fly.
Let's break this down:
Memory usage – huh? All modules are 'cached' in memory for the lifetime of your application anyway. What are they getting at here?
Startup penalty – how is this a performance problem? Deploying a web app already takes several seconds (or minutes if you're testing in CI). Adding half a second to startup means nothing. In fact if startup time matters anywhere, it matters more in development than production. If you're restarting your web server frequently enough that the startup time is an issue, you've got much bigger problems.
Also, there is no such warning about using Babel's require hook (require('babel-register')) in production, even though this presumably does pretty much exactly the same thing as babel-node. For example, you can do node -r babel-register server.js and get the same behaviour as babel-node server.js. (My company does exactly this in hundreds of microservices, with no problems.)
Is Babel's warning just FUD, or am I missing something? And if the warning is valid, why doesn't it also apply to the Babel require hook?
Related: Is it okay to use babel-node in production
– but that question just asks if production use is recommended, and the answers just quote the official advice, i.e. "No". In contrast, I am questioning the reasoning behind the official advice.
babel-node
The production warning was added to resolve this issue:
Without the kexec module, you can get into a really ugly situation where the child_process dies but its death or error never bubbles up. For more info see https://github.com/babel/babel/issues/2137.
It would be great if the docs on babel-node explained that it is not aimed for production and that without kexec installed that it has bad behaviour.
(emphasis mine)
The link for the original issue #2137 is dead, but you can find it here.
So there seem to be two problems here:
"very high memory usage on large apps"
"without kexec installed that it has bad behaviour"
These problems lead to the production warning.
babel-register
Also, there is no such warning about using Babel's require hook (require('babel-register')) in production
There may be no warning, but it is not recommended either. See this issue:
babel-register is primarily recommended for simple cases. If you're running into issues with it, it seems like changing your workflow to one built around a file watcher would be ideal. Note that we also never recommend babel-register for production cases.
I don't know enough about Babel's and Node's internals to give a full answer; some of this is speculation, but the caching babel-node would do is not the same thing as the cache Node does.
babel-node's cache would be another cache on top of Node's require cache, and at best it can cache the resulting source code (before it's fed to Node).
I believe Node's cache, after evaluating a module, will only keep things reachable from the exports; or rather, the things that are no longer reachable will eventually be GCed.
The startup penalty will depend on the contents of your .babelrc, but you're forcing babel to do the legwork to translate your entire source code every time it is executed. Even if you implement a persistent cache, babel-node would still need to do a cache fetch and validation for each file of your app.
In development, more appropriate tools like webpack in watch mode can, after the cold start, re-translate only modified files, which would be much faster than even a babel-node with perfectly optimized cache.
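As a sketch of the production alternative the warning implies: transpile once at build time and run plain Node, so no Babel code is in the production process at all (command names assume the scoped `@babel/cli` packages are installed; adjust paths to your layout):

```shell
# Build step (CI/deploy): transpile the whole tree once
npx babel src --out-dir dist

# Run step: plain Node, no compile-on-require, no extra cache
node dist/server.js
```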

OSB: Analyzing memory of proxy service

I have multiple proxies in a message flow. Is there a way in OSB by which I can monitor the memory utilization of each proxy? I'm getting OOM and want to investigate which proxy is eating away most of the memory.
Thanks !
If you're getting OOME then it's either because a proxy is not freeing up all the memory it uses (so will eventually fail even with one request at a time), or you use too much memory per invocation and it dies over a certain threshold but is fine under low load. Do you know which it is?
Either way, you will want to generate a heap dump on OOME so you can investigate what's going on. It's annoying but sometimes necessary. A colleague had to do that recently to fix some issues (one problem was an SB-transport platform bug, one was a thread starvation issue due to a platform work manager bug, the last one due to a Muxer bug when used in exalogic).
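A common way to capture that dump automatically is via standard HotSpot flags in the managed server's start arguments (a config sketch; the dump path is illustrative and must be writable by the server process):

```shell
# JVM options for the OSB managed server, e.g. in setDomainEnv.sh
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/tmp/osb-oome.hprof
```

The resulting .hprof file can then be opened in Eclipse MAT or a similar heap analyzer to see which objects dominate.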
If it just performs poorly under load, then you'll need to do the usual OSB optimisations: use fewer Assign steps (but assign more variables per step), and do a lot more in XQuery rather than in proxy steps, especially loops that don't need a service callout, since they can easily be rolled into a for loop in XQuery; you know, all the standard stuff.