I am using STS 3.4 and working on a web application based on the Grails framework.
When I use System.err.println in Groovy classes, it does not print anything to the standard Eclipse console (the STS console).
Actually, there are times when it does print, but only about 1 run in 10; I cannot explain this random behavior.
I am using a library that uses System.err.println for debugging purposes, but I cannot get any debugging info. All I need to know is where and how to get the System.err.println output.
Please help me. Thanks in advance.
If it is random behaviour, it may be a buffered stream that is not being flushed. That can happen especially when the output comes from a different thread.
As a solution, you can hook into System.err dispatching (it is a stream that you can replace from the outside via System.setErr()) and override its methods to get the output anywhere you want, or simply force it to flush. But be careful, as this may lead to performance problems.
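For illustration, a minimal sketch of that idea, assuming you just want everything flushed immediately (the log file name is arbitrary):

    import java.io.FileOutputStream;
    import java.io.PrintStream;

    public class ErrRedirect {
        public static void main(String[] args) throws Exception {
            // Replace System.err with an auto-flushing stream so buffered
            // output from any thread is written out immediately.
            PrintStream err = new PrintStream(
                    new FileOutputStream("stderr.log", true), /* autoFlush = */ true);
            System.setErr(err);

            System.err.println("now captured and flushed immediately");
        }
    }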
Consider using logging instead for more standard and configurable output. This should help you set it up: http://groovy.codehaus.org/Logging
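For example, a sketch using Groovy's logging AST transformations (the class here is hypothetical; @Log wires up a java.util.logging logger named log):

    import groovy.util.logging.Log

    @Log
    class DebugDemo {
        void doWork() {
            // Goes through java.util.logging instead of raw System.err,
            // so handlers and levels are configurable.
            log.warning 'something went wrong'
        }
    }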
We are in the process of migrating our testbench to UVM.
I am working on the first IP that will be verified using UVM.
I have to find out whether it is possible to reuse my uvm_sequences in an SoC environment that remains in OVM in the meantime.
If it is possible, I would like to find an example of how it's done.
Thanks in advance.
You cannot mix OVM and UVM that way. You should be able to write your uvm_sequence in such a way that it works in both by simply changing your u's to o's. You would have to limit your sequence to functionality that exists in both.
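For illustration, a minimal sketch of such a sequence (the sequence and item names are hypothetical); it sticks to API that exists in both libraries, so porting to OVM is a mechanical rename of uvm_ to ovm_ (and `uvm_ macros to `ovm_):

    class my_ip_seq extends uvm_sequence #(my_item);
      `uvm_object_utils(my_ip_seq)

      function new(string name = "my_ip_seq");
        super.new(name);
      endfunction

      virtual task body();
        // start_item/finish_item and type_id::create exist in OVM as well
        req = my_item::type_id::create("req");
        start_item(req);
        if (!req.randomize())
          `uvm_error(get_type_name(), "randomize failed")
        finish_item(req);
      endtask
    endclass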
If you use UVM RAL, there is a package that integrates that functionality back into OVM.
There is another package, ovm_container, that gives you the functionality of uvm_config_db.
In order to write a quick fix processor plug-in for Eclipse, one has to write a class that implements the IQuickFixProcessor interface and override its two methods: getCorrections and hasCorrections.
I have successfully written the code in getCorrections and got the quick fix utility to work, but I have no clue what I should write in hasCorrections.
My guess was that returning false indicates that the processor has no proposal to fix the current problem, and returning true the opposite. Consequently, I expected that making it return false would keep my quick fix proposal from being shown when the problem occurs, but that is not the case: there is no difference whether it returns true or false.
The source code is a bit hard to read, but it looks like hasCorrections is used when the quick fix code wants to know whether anything has corrections at all: if something does, it calls everything to get the corrections; if nothing does, it never asks for them.
Source is org.eclipse.jdt.internal.ui.text.correction.JavaCorrectionProcessor
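For illustration, a minimal sketch of a processor (the problem id chosen here is arbitrary): hasCorrections is the cheap pre-check, getCorrections the expensive path.

    import org.eclipse.core.runtime.CoreException;
    import org.eclipse.jdt.core.ICompilationUnit;
    import org.eclipse.jdt.core.compiler.IProblem;
    import org.eclipse.jdt.ui.text.java.IInvocationContext;
    import org.eclipse.jdt.ui.text.java.IJavaCompletionProposal;
    import org.eclipse.jdt.ui.text.java.IProblemLocation;
    import org.eclipse.jdt.ui.text.java.IQuickFixProcessor;

    public class MyQuickFixProcessor implements IQuickFixProcessor {

        @Override
        public boolean hasCorrections(ICompilationUnit unit, int problemId) {
            // Cheap filter: only claim to have corrections for problems we handle.
            return problemId == IProblem.UnusedImport;
        }

        @Override
        public IJavaCompletionProposal[] getCorrections(IInvocationContext context,
                IProblemLocation[] locations) throws CoreException {
            // Build the actual proposals here; returning null means "no proposals".
            return null;
        }
    }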
We would like to use the appassembler-maven-plugin to generate daemon scripts for our apps. We want to avoid having multiple configurations and generated scripts for the different environments (e.g. test, prod) and would like to be able to set a JVM system property or add an extra command-line argument when starting. I have been looking into this for a while now and can't seem to find a solution.
If anybody has any ideas or suggestions, they would be greatly appreciated.
Thanks
You can use extraJvmArguments to pass such things as a system property. See the examples on the documentation page.
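A sketch of the configuration (the property name, the ${app.env} placeholder, and the main class are hypothetical):

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>appassembler-maven-plugin</artifactId>
      <configuration>
        <!-- Baked into the generated scripts as extra JVM arguments -->
        <extraJvmArguments>-Dapp.env=${app.env}</extraJvmArguments>
        <programs>
          <program>
            <mainClass>com.example.Main</mainClass>
            <name>app</name>
          </program>
        </programs>
      </configuration>
    </plugin>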
I'm getting ready to deploy some code to multiple machines. As far as I know, using a Makefile.PL to track dependencies is the best way to ensure they are installed everywhere. The problem I have is that I'm not sure our Makefile.PL has been kept up to date as this application has passed through a few different developers.
Is there any way to automatically parse either my source or a few full runs of my program to determine exactly which versions of which modules my application depends on? On top of that, is there any way to filter it based on CPAN distributions, so that I only depend on Moose instead of every single module that comes with Moose?
A third, related question: if you depend on a version of a module that is not the latest, what is the best way to have someone else install it? Should I start including entire localized Perl installations with my application?
Just to be clear - you cannot generically get a list of modules that the app depends on by code analysis alone. E.g. if your app does eval { require $module; $module->import() }, where $module is passed via the command line, then this can ONLY be detected by actually running the specific command line with ALL the module values.
If you do wish to do this, you can figure out every module used by a combination of runs via:
Devel::Cover. Coverage reports would list 100% of the modules used, but you don't get version numbers.
Print %INC at every single possible exit point in the code, as slu's answer said. This should probably be done in an END{} block as well as in a __DIE__ handler to cover all possible exit points, and even then it may not cover 100% of the generic case, e.g. if somewhere within the program your __DIE__ handler gets overwritten.
Devel::Modlist (also mentioned in slu's answer) - the downside compared to Devel::Cover is that it does NOT seem able to aggregate a database across multiple sample runs the way Devel::Cover does. On the plus side, it is purpose-built, so it has a lot of very useful options (CPAN paths, versions).
Please note that the other module (Module::ScanDeps) does NOT seem to allow runtime analysis based on arbitrary command-line arguments (at first glance it only seems to allow executing the program with no arguments); if that's true, it is inferior to all three methods above for any code that may load modules dynamically.
Module::ScanDeps - Recursively scan Perl code for dependencies
It does both static and runtime scanning, but it only finds modules; I don't know of any exact way of verifying which versions come from which distributions. You could get old packages from BackPAN, or just package your entire chain of local dependencies with PAR.
You could look at %INC; see http://www.perlmonks.org/?node_id=681911, which also mentions Devel::Modlist.
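For illustration, a minimal sketch that dumps every loaded module and its version at program exit (it assumes each module sets $VERSION in the conventional way):

    use strict;
    use warnings;

    END {
        # %INC maps loaded file paths (Foo/Bar.pm) to their locations on disk.
        for my $path (sort keys %INC) {
            (my $module = $path) =~ s{/}{::}g;
            $module =~ s{\.pm\z}{};
            my $version = eval { $module->VERSION } || 'unknown';
            print STDERR "$module => $version\n";
        }
    }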
I would definitely use Devel::TraceUse, which also shows a tree of the modules, so it's easy to guess where they are being loaded.
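For reference, Devel::TraceUse is run through the debugger hook, along these lines (the script name is hypothetical):

    perl -d:TraceUse your_program.pl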
I'm not much of an Eclipse guru, so please forgive my clumsiness.
In Eclipse, when I call Assert.assertEquals(obj1,obj2) and that fails, how do I get the IDE to show me obj1 and obj2?
I'm using JExample, but I guess that shouldn't make a difference.
Edit: Here's what I see: [screenshot]
Comparing objects through the failure trace is not an easy task when your objects are even a little bit complex.
Comparing with the debugger is useful if you have not redefined toString(), but it is still a very tedious solution because you have to inspect each object from both sides with your own eyes.
The JUnit Eclipse plugin offers an option when there is a failure: "Compare actual With Expected TestResult". The view is close enough to classic content-comparison tools: [screenshot]
The problem is that it is available only when you write assertEquals() with String objects (in the screenshot, the option in the corner is not offered for a non-String class): [screenshot]
You may use toString() on your objects in the assertion, but it's not a good solution:
firstly, you correlate toString() with equals(Object): a modification of one must entail a modification of the other;
secondly, the semantics are no longer respected: toString() should return a useful representation for debugging the state of an object, not identify an object in the Java sense (that is the job of equals(Object)).
In my opinion, the JUnit Eclipse plugin is missing a feature.
When a comparison fails, even between non-String objects, it should offer a comparison of the two objects relying on their toString() methods.
That would provide a minimal visual way of comparing two unequal objects.
Of course, as equals(Object) is not necessarily correlated with toString(), the highlighted differences would have to be studied by eye, but it would already be a very good basis, and in any case much better than no comparison tool at all.
If the information in the JUnit view is not enough for you, you can always set an exception breakpoint on, for example, java.lang.AssertionError. When running the test, the debugger will stop immediately before the exception is actually thrown.
Assert.assertEquals() will put the toString() representation of the expected and actual objects in the message of the AssertionFailedError it throws, and Eclipse will display that in the "failure trace" part of the JUnit view: [screenshot]
If you have complex objects you want to inspect, you'll have to use the debugger and put a breakpoint inside Assert.assertEquals().
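For illustration, a minimal failing test: BigDecimal.equals() is scale-sensitive, so this fails and the failure trace shows both toString() representations.

    import static org.junit.Assert.assertEquals;

    import java.math.BigDecimal;

    import org.junit.Test;

    public class ComparisonTest {

        @Test
        public void scalesDiffer() {
            // Fails with: expected:<1.0> but was:<1.00>
            assertEquals(new BigDecimal("1.0"), new BigDecimal("1.00"));
        }
    }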
What are you seeing?
When you do assertTrue() and it fails, you see a null.
But when you do assertEquals, it is supposed to show you what it expected and what it actually got.
If you are using JUnit, make sure you are looking at the JUnit view and hovering the mouse over the failed test.
FEST Assert will display a comparison dialog in case of an assertion failure, even when the objects you compare are not strings. I explained it in more detail on my blog.
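For reference, a minimal sketch of the FEST style (assuming fest-assert is on the classpath; the test is deliberately failing):

    import static org.fest.assertions.Assertions.assertThat;

    import org.junit.Test;

    public class FestStyleTest {

        @Test
        public void comparesNonStrings() {
            // On failure, FEST builds a descriptive message from the
            // string representations of both objects.
            assertThat(41).isEqualTo(42);
        }
    }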
If what you are comparing is a String, then you can double-click the stack element and it will pop up a dialog showing the diff in Eclipse.
This only works with Strings, though. For the general case, the only way to see the real reason is to set a breakpoint and step into it.