Eclipse plugin to measure programmer performance/stats

Does anyone know of an Eclipse plugin that can give me some stats about my behaviour/usage of the Eclipse IDE?
There are quite a few things I would like to know:
How often/when do I invoke the "Build All" command (through Ctrl+B)
How often does compilation fail/succeed (+ number of errors/warnings)
How often do I hit Backspace? (I do that way too often; if pressing that key made a nasty sound, I would in time learn to type correctly in the first place)
How many of the characters/lines of code that I type do I delete (often almost immediately)?
How effective/efficient is my mouse/keyboard/IDE usage? (Kind of like measuring APM in StarCraft; this could be fun)
If there is no such Eclipse plugin around, how complex and time-consuming would it be to write a plugin that can accomplish the above?
edit:
I am interested in these usage stats since I noticed that a good IDE and a fast computer have strongly influenced my coding behaviour over the past few years.
I use Content Assist all the time; it's very practical, but now I notice that I would be totally unable to get anything done without it.
Hitting Backspace has become almost a reflex :) and so has pressing Ctrl+B.
I now routinely hit Ctrl+B to do an incremental build, sometimes even after changing just a few lines, so that the compiler gives immediate feedback; since compiling is quite fast nowadays, this works quite well. It also keeps me from actually having to think on my own: when I do something wrong, I rely on the compiler to spot the errors, and I have become worse at spotting them myself since I no longer really need to.
Did you guys notice these changes in yourself too?

The best tool in this area (in my opinion) for Eclipse is Lack of Progress Bar. It doesn't have all the features that you request, but it does let you measure developer performance and the bottlenecks of the development process.
What is Lopb?
Lack of Progress Bar (Lopb) is an Eclipse plugin that tracks how long
developers wait for background jobs to complete. By benchmarking the
performance of background jobs, Lopb provides developers with metrics
on how much of their day was wasted due to overhead introduced by the
development tools and infrastructure that they depend on or access
through their IDE.

Have a look at mousefeed; it will help you move from using the mouse to the keyboard. Not sure if it can keep stats on your usage.
As for other stats, look at the Usage Data Collector; it keeps track of all Eclipse usage: favorite views, perspective usage, most common errors, etc.
Don't know of anything that actually keeps track of how you type in the editor, but why would you want that? Part of the development process is to change things a lot, IMHO. Focus on the end result instead and look at some static code analyzers.
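That said, if you did want to roll your own plugin for the command-level stats (how often Ctrl+B / Build All is invoked, and so on), the core is fairly small: register an execution listener with the workbench command service. Below is a rough sketch in Java; it assumes a plugin that already depends on org.eclipse.ui, and the Build All command id in the comment is my assumption, so verify it in the Keys preference page. Keystroke-level stats (Backspace counts) would need an extra listener on the editor widget, which is not shown.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.eclipse.core.commands.ExecutionEvent;
import org.eclipse.core.commands.ExecutionException;
import org.eclipse.core.commands.IExecutionListener;
import org.eclipse.core.commands.NotHandledException;
import org.eclipse.ui.PlatformUI;
import org.eclipse.ui.commands.ICommandService;

/** Counts how often each workbench command is invoked during a session. */
public class CommandUsageCounter implements IExecutionListener {

    private final Map<String, Integer> counts = new ConcurrentHashMap<String, Integer>();

    /** Call once from your plugin's startup code (e.g. an earlyStartup() hook). */
    public void install() {
        // The cast keeps this compatible with older, non-generic getService() signatures.
        ICommandService service = (ICommandService) PlatformUI.getWorkbench()
                .getService(ICommandService.class);
        service.addExecutionListener(this);
    }

    public void preExecute(String commandId, ExecutionEvent event) {
        Integer previous = counts.get(commandId);
        counts.put(commandId, previous == null ? 1 : previous + 1);
    }

    public void postExecuteSuccess(String commandId, Object returnValue) {
        // Success/failure could be tallied separately here (e.g. for builds).
    }

    public void postExecuteFailure(String commandId, ExecutionException exception) {
    }

    public void notHandled(String commandId, NotHandledException exception) {
    }

    /** e.g. countFor("org.eclipse.ui.project.buildAll") for Ctrl+B (my assumption). */
    public int countFor(String commandId) {
        Integer count = counts.get(commandId);
        return count == null ? 0 : count;
    }
}
```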

Paster vs. ArchGenXML

I am asking myself: is it better to use Paster for creating content types, browser views, portlets, etc., or ArchGenXML?
Which of the two creates the better source code?
Is there an advantage to using one or the other?
Thanks.
They're two quite different things.
Paster creates an initial skeleton. Once it's done, you're on your own.
ArchGenXML needs a UML model first, and then it can create quite complex systems of code. As long as you modify your code only within prescribed regions of the .py file, you can change your model and rerun ArchGenXML as often as you wish.
Either one only generates code as good as its authors have provided, and while I use ArchGenXML extensively, I see a fair bit of deprecated code generated. OTOH, I've never seen it generate completely invalid code.
I use ArchGenXML because I like having my original source in UML.
Just to give you another point of view:
I think that ArchGenXML is a tool for someone who doesn't really want to get his/her hands dirty for as long as possible: a tool that lets you take a little more time planning and a little less time coding (I see this as a negative point). Paster, on the contrary, is just a convenience to speed up some work, and you will get your hands dirty very soon.
I started coding in Plone using paster; then after a while, when I felt secure, I abandoned it (like a baby walker :D ). And then you learn to run, and after some time you get old and lazy and you realize that, after all, paster is still your friend.
ArchGenXML, on the other hand, I think is an obstacle to your learning.
Plus, if you use paster and at the end of the project your code sucks, you can blame only yourself (I see this as a good point).
These are just my 2¢.

Should upgrading tools / frameworks / dependencies to latest version be automatic?

At my current job, it goes without question that if a new version of a technology that we use in our project is released, we upgrade it ASAP. At my previous job, that was not the case... we had to convince management that it was necessary. As such, we often had to do without features that could have been helpful and continue living with bugs that had long ago been fixed. At times, it was even hard to get support for the old versions we were using. I don't really see that point of view, especially after experiencing the opposite approach. Are there really 2 sides to this question?
Of the two approaches, I absolutely would prefer the one where you are now. Having applications falling behind can be painful for many reasons, some of which you noted.
The only caveats would be centered around time, really; it would usually take some non-trivial amount of time to update something for new frameworks/dependencies. It's nice when frameworks maintain backward-compatibility, but that does not always happen.
Breaking changes are usually obvious, and usually (we hope) exist for some very good reasons. More troublesome are the silent changes that do not prevent building, but cause subtle bugs; like a library function with the same signature, but which has slightly different behavior or return results.
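To make that concrete, here is a toy illustration (the class and the "v1"/"v2" variants are invented for the example): the signature stays identical, so everything still compiles after the upgrade, but callers that relied on the old behaviour silently get different results.

```java
public class SilentChangeDemo {

    // Hypothetical utility as shipped in "v1": trims outer whitespace only.
    static String normalizeV1(String s) {
        return s.trim();
    }

    // Same name, same signature in "v2", but it now also collapses inner whitespace.
    static String normalizeV2(String s) {
        return s.trim().replaceAll("\\s+", " ");
    }

    public static void main(String[] args) {
        String input = "  order   42  ";
        System.out.println("[" + normalizeV1(input) + "]"); // [order   42]
        System.out.println("[" + normalizeV2(input) + "]"); // [order 42]
        // Both versions compile against the same call sites; only the output
        // differs, which is exactly the kind of change a compiler cannot flag.
    }
}
```

The usual defence is a regression test that pins down the behaviour your code depends on, so an upgrade that changes it fails the build instead of failing in production.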
But if an application is meant to be supported long-term, keeping it up-to-date is really a must, IMO.
There is a grey area that lies somewhere in the middle.
Your old place lived by a "don't rock the boat" approach. Yes, the stuff might not be up to date, but they knew what it could do (and maybe couldn't) and how to handle it. If there's an issue, you hopefully know it's not the kit, as it's been around the block (or you've been around the block enough to find all the bugs in it, learn how to handle them, and keep your kit up and running).
Your new place puts all its faith in the idea that the newer kit must be better and can't possibly have been released without lots of checks and balances. Yes, there might be some quirks, but knowing that some of those old bugs are gone (well, the release docs say they're fixed, anyway) is worth the time spent finding new bugs in the latest release.
It's a fine line to tread, and it depends very much on what the tech is used for and how mission-critical it is.
Yes, there certainly are two sides to this question. I'll weigh in on the side of not upgrading whenever a new version is released ...
If it ain't broke, don't fix it. If your system is working, then any change is a risk, and not all risks are worth taking.
If you upgrade every time a new release of any component pops up, you will find yourselves following other people's schedules.
Change management is a vital discipline for robust and reliable systems.

Pair programming, mixed IDE environments?

Has anyone got any experience of teams doing pair programming where there is a mixed IDE environment? I'm a long-time IntelliJ user; others use Eclipse, which you may have heard of.
In my mind pair programming involves a lot of passing the keyboard between the programmers. But every time I get the keyboard I grind to a halt, as I don't know how to do anything anymore. (It's like suddenly I'm an idiot!)
Now I could, probably should, learn my way round Eclipse. (Not starting a holy war here about relative merits.) But I wonder if anyone else has got an opinion?
I don't see the need for passing the keyboard around. In my view, you work on a part while the other half of your pair looks over your shoulder. Sometimes, I imagine, you would have to take the wheel, but generally not every 10 minutes. If he types for 4 hours and then you switch places, just switch IDEs at that time.
I agree you should learn the tools that are used, and if there is an actual published or documented standard you should follow it, but if you are allowed to use any IDE you want, then I don't see an issue. But if it inhibits your ability to deliver, then maybe you pair up with someone using the same IDE as you.
About 10 years too late for the OP, but this question is still highly ranked in search engines, so others interested in remote mixed-environment pair programming can try CodeTogether. It's available for IntelliJ, Eclipse, VS Code and IDEs based on them.
Participants join in a browser, but get a full IDE-like experience with IntelliSense, validation, reference searches, navigation, etc. CodeTogether is simple, fast, free, anonymous, and encrypted. The plugins/extensions are in the normal marketplaces/registries you'd expect and are also available on the website.
Full disclosure: I work for Genuitec, the makers of CodeTogether, and we really hope you enjoy it. Any constructive feedback on Gitter or GitHub is always appreciated.
I have not done this in a multi-IDE environment. But pairing is, to my mind, far and away the best way to learn IDE features. So you should come up to speed quickly on Eclipse, and your colleagues, likewise, should get a handle on IntelliJ in short order. Both of you will become better versed in both environments - and that's a good position from which to settle on a team IDE, should you choose to do so.
By comparison with other means of learning, pairing teaches you the features that are useful to you (or your pair, who probably has a similar set of needs). You learn almost by osmosis; as your pair uses a feature you may find yourself asking, "how did you do that?" or "what did you just do?" This is teaching you the features you need, exactly when you need them.
In your situation, there may be additional value: you may find yourself wanting a feature that your IDE offers; your pair may never have encountered it (but it might be in Eclipse, too). So you spend a minute tracking down that feature, and now both of you have learned new (and useful) functionality of the IDE.
Standardize your environment! As much as you need a common source style, I would argue you also need a common way of working, including having a common IDE. All kinds of settings, knowledge, plugins, etc. are much easier to share, including in your example about pair programming.
In pair programming, the pair should standardize on an IDE.
My suggestion would be either to pair with another IntelliJ user or, if the rest of the group is on Eclipse, start learning Eclipse.
You're going to lose too much time switching between IDEs to gain the efficiencies of pair programming.
You could have both IDEs loaded on the pairing machine and switch between them as needed, but I'd recommend standardizing IDEs with your pairing partner. You might want to bring this question up in your next retrospective and see what the team consensus is.

What IDE features should I learn to use

I'm a largely self-taught front-end developer only just making the transition into back-end development in order to be able to say yes to more projects.
I've found Eclipse to be my favourite text editor for JavaScript and PHP, but I'm conscious that it (and other IDEs) has a whole load of features that I don't know how to use, or why I should want to use them.
I'd really appreciate some pointers on why using such-and-such a feature of an IDE helps you work more efficiently, write better code etc..., and maybe some links to useful sources of information.
Cheers
edit - I'm already converted to using ftp features and code explorer/function lists
You may find Eclipse tips such as these interesting. But if your objective is to "write better code", then I think you need to look elsewhere. Understand the language you are using better, understand design patterns and the reasons why people apply them, and study testing techniques. There's so much else to spend your time on. Truly working smarter is the objective.
I would always advise learning what goes on behind the IDE and then using the IDE.
Get familiar with:
Build/distribution processes (like Make and others)
How compilation works and what its component processes are
How the IDE generates things like autocomplete (scanning headers/source)
Version control; get familiar with it on the command line. It will mean you can deal with issues/requirements not covered by the IDE.
Once you know what goes on behind the scenes for the language/environment you are programming in ... the IDE is a bit mundane, just a modular text-editor on steroids.
Good luck
Maybe this is obvious, but in my opinion class/function/variable name refactoring is among the most essential features of any IDE. Constant refactoring is one of the secrets of making good code.
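To make that concrete, here is a tiny, made-up before/after (all names are purely illustrative) showing what a rename plus extract-constant refactoring, applied through the IDE so every call site is updated automatically, does for readability:

```java
// Before (readers have to decode the body to discover the intent):
//     double calc(double a, int b) { return b > 10 ? a * 0.9 : a; }

// After rename and extract-constant refactorings:
public class PriceCalculator {

    private static final int BULK_THRESHOLD = 10;    // more items than this...
    private static final double BULK_DISCOUNT = 0.9; // ...earns a 10% discount

    /** Returns the price after applying the bulk discount, if it applies. */
    public double discountedPrice(double basePrice, int itemCount) {
        return itemCount > BULK_THRESHOLD ? basePrice * BULK_DISCOUNT : basePrice;
    }
}
```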
That's a bit of a difficult question to answer since most modern IDEs offer such a wide range of features. From a general standpoint, I'd become familiar with hot key combinations for repetitive tasks (saving, building, code folding, etc.) and how to install/enable/disable add-ons and plug-ins. That will make you more efficient.
As Aiden mentions, knowing how to do a build from the command line (and compilation in general) will be useful, as will version control systems. Get familiar with Git and Subversion.
The IDE will not make you write better code. For that, you're going to need practice and some time spent reading/listening to podcasts. Read Robert Martin's "Clean Code" for starters.
Additionally, spend the time to learn proper TDD and the toolset(s) available for your IDE.

Getting your head around other people's code

I'm occasionally unfortunate enough to have to make alterations to very old code that is poorly documented (or not documented at all) and poorly designed.
It often takes a long time to make a simple change because there is not much structure to the existing code and I really have to read a lot of code before I have a feel for where things would be.
What I think would help a lot in cases like this is a tool that would allow one to visualise an overview of the code, and then maybe even drill down for more detail. I suspect such a tool would be very hard to get right, given that it is trying to find structure where there is little or none.
I guess this is not really a question, but rather a musing. I should make it into a question - what do others do to assist in getting their head around other people's code, the good and the bad?
Hmm, this is a hard one; so much to say, so little time ...
1) If you can run the code, it makes life soooo much easier; breakpoints (especially conditional ones) are your friend.
2) A purist's approach would be to write a few unit tests for known functionality, then refactor to improve the code and your understanding, then re-test. If things break, create more unit tests - repeat until bored/old/moved to a new project.
3) ReSharper is good at showing where things are being used - what's calling a method, for instance. It's static analysis, but a good start, and it helps with refactoring.
4) Many .NET events are coded as public, and events can be a pain to debug at the best of times. Recode them to be private and use a property with add/remove accessors. You can then use a breakpoint to see what is listening on an event.
BTW - I'm playing in the .NET space and, like Joel, would love a tool to help with this kind of stuff. Does anyone out there know of a good dynamic code reviewing tool?
I have been asked to take ownership of some NASTY code in the past - both work and "play".
Most of the amateurs I took over code for had just sort of evolved the code to do what they needed over several iterations. It was always a giant incestuous mess of library A calling B, calling back into A, calling C, calling B, etc. A lot of the time they'd use threads and not a critical section was to be seen.
I found the best/only way to get a handle on the code was to start at the OS entry point [main()] and build my own call stack diagram showing the call tree. You don't really need to build a full tree at the outset. Just trace through the section(s) you're working on at each stage and you'll get a good enough handle on things to be able to run with it.
To top it all off, use the biggest slice of dead tree you can find and a pen. Laying it all out in front of you so you don't have to jump back and forward on screens or pages makes life so much simpler.
EDIT: There's a lot of talk about coding standards... they will just make poor code look consistent with good code (and thus usually harder to spot). Coding standards don't always make maintaining code easier.
I do this on a regular basis and have developed some tools and tricks.
Try to get a general overview (object diagram or other).
Document your findings.
Test your assumptions (especially for vague code).
The problem with this is that at most companies you are judged by results. That's why some programmers write poor code fast and move on to a different project. So you are left with the garbage, and your boss compares your sluggish progress with that of the quick-and-dirty guy. (Luckily my current employer is different.)
I generally use UML sequence diagrams of various key ways that the component is used. I don't know of any tools that can generate them automatically, but many UML tools such as BoUML and EA Sparx can create classes/operations from source code which saves some typing.
The definitive text on this situation is Michael Feathers' Working Effectively with Legacy Code. As S. Lott says, get some unit tests in to establish the behaviour of the legacy code. Once you have those in, you can begin to refactor. There seems to be a sample chapter available on the Object Mentor website.
I strongly recommend BOUML. It's a free UML modelling tool, which:
is extremely fast (fastest UML tool ever created, check out benchmarks),
has rock solid C++ import support,
has great SVG export support, which is important because viewing large graphs in a vector format that scales fast in e.g. Firefox is very convenient (you can quickly switch between a "bird's eye" view and a class detail view),
is full featured and intensively developed (look at the development history; it's hard to believe that such fast progress is possible).
So: import your code into BOUML and view it there, or export to SVG and view it in Firefox.
See Unit Testing Legacy ASP.NET Webforms Applications for advice on getting a grip on legacy apps via unit testing.
There are many similar questions and answers. Here's the search https://stackoverflow.com/search?q=unit+test+legacy
The point is that getting your head around legacy is probably easiest if you are writing unit tests for that legacy.
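A concrete flavour of that: a "characterization test" doesn't assert what the legacy code should do, only what it currently does, so later refactoring can't change behaviour silently. A minimal JUnit 4 sketch (LegacyReportFormatter and its method are hypothetical stand-ins for your own legacy class):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class LegacyReportFormatterTest {

    @Test
    public void headerFormatMatchesCurrentBehaviour() {
        // Hypothetical legacy class under test.
        LegacyReportFormatter formatter = new LegacyReportFormatter();

        String header = formatter.formatHeader("ACME", 2009);

        // The expected value is simply what the code produced the first time
        // it was run; it documents current behaviour, not correctness.
        assertEquals("== ACME (2009) ==", header);
    }
}
```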
I haven't had great luck with tools to automate the review of poorly documented/executed code, because a confusing/badly designed program generally translates to a less-than-useful model. It's not exciting or immediately rewarding, but I've had the best results with picking a spot and following the program execution line by line, documenting and adding comments as I go, and refactoring where applicable.
A good IDE (Emacs or Eclipse) can help in many cases. Also, on a UNIX platform there are some tools for cross-referencing (etags, ctags) or checking (lint), or gcc with many, many warning options turned on.
First, before trying to comprehend a function/method, I would refactor it a bit to fit your coding conventions (spaces, braces, indentation) and remove most of the comments if they seem to be wrong.
Then I would refactor and comment the parts you understand, and try to find/grep those parts over the whole source tree and refactor them there also.
Over time, you get nicer code that you like to work with.
I personally do a lot of drawing of diagrams, and figuring out the bones of the structure.
The fad du jour (and possibly quite rightly) has got me writing unit tests to test my assertions and build up a safety net for changes I make to the system.
Once I get to a point where I'm comfortable enough that I know what the system does, I'll take a stab at fixing bugs in the sanest way possible, and hope my safety net is nearing completion.
That's just me, however. ;)
I have actually been using the refactoring features of ReSharper to help me get a handle on a bunch of projects that I inherited recently. So, to figure out another programmer's very poorly structured, undocumented code, I actually start by refactoring it.
Cleaning up the code, renaming methods, classes and namespaces properly, and extracting methods are all structural changes that can shed light on what a piece of code is supposed to do. It might sound counterintuitive to refactor code that you don't "know", but trust me, ReSharper really allows you to do this. Take, for example, the issue of red-herring dead code. You see a method in a class or perhaps a strangely named variable. You can start by trying to look up usages or, ugh, do a text search, but ReSharper will actually detect dead code and color it gray. As soon as you open a file, you see in gray (and with scroll bar flags) what would in the past have been confusing red herrings.
There are dozens of other tricks and probably a number of other tools that can do similar things, but I am a ReSharper junkie.
Cheers.
Get to know the software intimately from a user's point of view. A lot can be learnt about the underlying structure by studying and interacting with the user interface(s).
Printouts
Whiteboards
Lots of notepaper
Lots of Starbucks
Being able to scribble all over the poor thing is the most useful method for me. Usually I turn up a lot of "huh, that's funny..." moments while trying to make basic code structure diagrams, and those turn out to be more useful than the diagrams themselves in the end. Automated tools are probably more helpful than I give them credit for, but the value of finding those funny bits exceeds the value of rapidly generated diagrams for me.
For diagrams, I look for mostly where the data is going. Where does it come in, where does it end up, and what does it go through on the way. Generally what happens to the data seems to give a good impression of the overall layout, and some bones to come back to if I'm rewriting.
When I'm working on legacy code, I don't attempt to understand the entire system. That would result in complexity overload and subsequent brain explosion.
Rather, I take one single feature of the system and try to understand completely how it works, from end to end. I will generally debug into the code, starting from the point in the UI code where I can find the specific functionality (since this is usually the only thing I'll be able to find at first). Then I will perform some action in the GUI, and drill down in the code all the way down into the database and then back up. This usually results in a complete understanding of at least one feature of the system, and sometimes gives insight into other parts of the system as well.
Once I understand what functions are being called and what stored procedures, tables, and views are involved, I then do a search through the code to find out what other parts of the application rely on these same functions/procs. This is how I find out if a change I'm going to make will break anything else in the system.
It can also sometimes be useful to attempt to make diagrams of the database and/or code structure, but sometimes it's just so bad or so insanely complex that it's better to ignore the system as a whole and just focus on the part that you need to change.
My big problem is that I (currently) have very large systems to understand in a fairly short space of time (I pity contract developers on this point) and don't have a lot of experience doing this (having previously been fortunate enough to be the one designing from the ground up.)
One method I use is to try to understand the meaning of the names of variables, methods, classes, etc. This is useful because it (hopefully, increasingly) builds up a high-level view of the train of thought from the atomic level.
I say this because developers will typically give their elements (what they believe are) meaningful names that provide insight into their intended function. Admittedly, this is flawed if the developer has a defective understanding of their program or the terminology, or (often the case, IMHO) is trying to sound clever. How many developers have seen keywords or class names and only then looked the term up in the dictionary for the first time?
It's all about the standards and coding rules your company is using.
If everyone codes in a different style, then it's hard to maintain another programmer's code, etc. If you decide what standard you'll use and set some rules, everything will be fine :) Note that you don't have to make a lot of rules, because people should have the possibility to code in a style they like; otherwise you can be very surprised.