How bad is SLOC (source lines of code) as a metric? [closed] - code-metrics

We are documenting our software development process. For technical people, this is pretty easy: iterative development with internal milestones every four weeks, external every 3 months.
However, the purpose of this exercise is to expose things for our project management in terms that they can understand. Specifically, these non-technical managers need metrics that they can understand.
I understand our options for metrics well and have proposed a whole set (requirements met and actual costs vs. budgeted costs are two of my favorites). However, we do have some old hands involved and they tend to hang onto metrics like SLOC.
I understand the temptation of SLOC: it seems easy for non-software people to understand and it seems like the closest analog of a physical thing (it's just like counting punched cards back in the old days!).
So here's the question: how can I explain the dangers of SLOC to a non-technical person?
Here's some concrete motivation: we work on a fairly mature deployed system that has years of history behind it. As we add features, SLOC tends to stay approximately level or even decrease (refactoring removes old or dead code, new features are often just adjustments of existing ones, etc.). To a non-programmer manager, a non-increasing SLOC in a development project is perplexing at best...
Clarifying in response to a recent answer below: remember, I'm arguing that SLOC is a bad metric for the purposes of measuring project progress. I'm not arguing that it is a number that's not worth collecting. It requires extensive context to do anything useful with it and most program managers don't have that context.

Someone said:
"Using SLOC to measure software progress is like using kg for measuring progress on aircraft manufacturing"
It is totally inappropriate as it encourages bad practices like:
copy-paste syndrome
discouraging refactoring that would make things easier
stuffing the code with meaningless comments
...
The only use is that it can help you estimate how much paper to put in the printer when you do a printout of the complete source tree.

The issue with SLOC is that it's an easy metric to game. Being productive does not equate to producing more code. So the way I've explained it to people, barring what Skilldrick said, is this:
The more lines of code there are, the more complicated something gets.
The more complicated something gets, the harder it is to understand.
Before I add a new feature or fix a bug I need to understand it.
Understanding takes time.
Time costs money.
Smaller code -> easier to understand -> cheaper to add new features
Bean counters can understand that.

Show them the difference between:
for (int i = 0; i < 10; i++) {
    print i;
}
and
print 0;
print 1;
print 2;
...
print 9;
And ask them whether 10 SLOC or 3 SLOC is better.
In response to the comments:
It doesn't take long to explain how a for loop works.
After you show them this, say "We now need to print numbers up to 100 - here's how you make that change," and then show how much longer it takes to change the non-DRY code.
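For instance, here is a minimal Java rendering of that change (assuming System.out for output, which is not in the original pseudocode): the DRY version only needs the loop bound edited, while the unrolled version needs 90 more print lines.
public class PrintNumbers {
    public static void main(String[] args) {
        // DRY version after the change: only the loop bound "10" became "100"
        for (int i = 0; i < 100; i++) {
            System.out.println(i);
        }
    }
}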

I disagree that SLOC is a bad metric. It may be moot to go into a years-old question with eleven answers, but I'll still add another.
Most arguments call it a bad metric because it is not suited to directly measuring productivity. That is a strange argument; it assumes the metric is being used in an unreasonable way. With this reasoning, one could call the Kelvin a bad unit because it is unsuited to measuring distance.
Code length is a viable measure of ballast.
The amount of non-comment code lines correlates with:
undetected errors
maintenance costs
training time for new contributors
migration costs
new feature costs
and many more similar kinds of costs, like the cost of optimization.
Of course SLOC count isn't a precise measure of any of these. Code can be anywhere between very nice and very ugly to manage. But it can be assumed that code length is rarely free, and thus, longer code is often harder to manage.
If I were managing a team of programmers, I would very much want to keep track of the ballast it creates or removes.
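As a rough illustration (my own sketch, not from this answer): counting non-blank, non-comment lines takes only a few lines of Java. This version assumes //-style line comments only and ignores block comments, so treat it as an approximation.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SlocCounter {
    // Counts lines that are neither blank nor pure // comments.
    static long countSloc(Path file) throws IOException {
        try (var lines = Files.lines(file)) {
            return lines.map(String::trim)
                        .filter(line -> !line.isEmpty())
                        .filter(line -> !line.startsWith("//"))
                        .count();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(countSloc(Path.of(args[0])));
    }
}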

Explain that SLOC is an excellent measurement of the lines of code in the application, nothing else. The number of lines in a book, or the length of a film doesn't determine how good it is. You can improve a film and shorten it, you can improve an application and reduce the lines of code.

Pretty bad (-:
A much better idea would be to cover test cases rather than code.
The idea is this: a developer should commit a test case that fails, then commit the fix in the next build, and the test case should pass... just measure how many test cases the developer added.
As a bonus, collect coverage stats (branch coverage is better than line coverage here).
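A minimal sketch of that workflow, assuming JUnit 5 and a hypothetical Discounts class (neither is named in the answer): the first commit adds the failing test, the next commit fixes the production code so it passes, and the metric counts the tests added.
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical production class under test.
class Discounts {
    static double apply(double price, double rate) {
        return price * (1.0 - rate);
    }
}

class DiscountsTest {
    // Commit 1: add this test while it still fails against the buggy code.
    // Commit 2: fix Discounts.apply() so the test passes.
    @Test
    void tenPercentDiscountIsAppliedExactly() {
        assertEquals(90.0, Discounts.apply(100.0, 0.10), 0.001);
    }
}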

You don't judge how good a plane is (how many features it has, how it performs) based on its weight (SLOC).
When you want your plane to fly higher, longer and perform better, you don't add weight to it. You replace parts of it with lighter/better materials. You strip off parts you don't need so as not to add unnecessary weight.

I believe SLOC is a great metric. It tells you how large your system is. That is good for judging complexity and resources. And it helps you prepare the next developer for working on a codebase.
But SLOC count should be analyzed only AFTER other appropriate code quality metrics have been applied. So...
Do NOT write 2 lines of code when 1 will do, unless the 2-line version makes the code 2 times easier to maintain.
Do NOT fluff code with unnecessary comments just to fluff SLOC count.
Do NOT pay people by SLOC count.
I have been managing software projects for 30 years. I use SLOC count all the time, to help understand mature systems. I have never found it useful to even glance at SLOC count until a project is near version 1.0 release.
Basically, during the development process, I worry about quality, performance, usability, and conformance to specifications. Get those right, and the project will probably be a success. When the dust settles, look at SLOC count. You might be surprised that you got SO much out of 5,000 lines of code. And you might be surprised that you got SO little! (But SLOC count does not affect quality, performance, usability, and conformance to specification.)
And always code like the person who will be working on your code next is a violent psychopath who knows where you live.
Cheers,
Uncle Chip

Even modern code metrics tools criticize SLOC counting. I like the point made in the ProjectCodeMeter FAQ:
What's wrong with counting Lines Of Code (SLOC / LLOC)?

Why SLOC is bad as an individual metric of productivity
Think of code as a block of clay or stone from which you need to carve, say, 10 statues. It's not how many statues you carve that counts; it's how well you've carved them. Similarly, it's not how many lines you've written but how well they function. This is how LOC can backfire as a metric.
Productivity also changes when writing a complex piece of code. It takes a second to write a print statement but a lot of time to write a complex piece of logic. Not all fingers are equal.
How SLOC can be used to your benefit
I think defects per SLOC is a good metric. Yes, the difficulty level comes into play, but this is a good parameter that the managers can throw around while doing business. Try to think from their perspective too. They don't hate you or your work, but they need to tell customers that you're the best, and for that they need something tangible. Give them what you can :)
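As a hypothetical sketch of the kind of tangible number this gives managers (the figures below are invented for illustration), defect density is usually reported as defects per thousand lines of code (KLOC):
public class DefectDensity {
    public static void main(String[] args) {
        // Hypothetical figures: 18 defects found in a 45,000-line module.
        int defects = 18;
        int sloc = 45_000;
        double perKloc = defects / (sloc / 1000.0); // = 0.4 defects per KLOC
        System.out.printf("Defect density: %.2f defects/KLOC%n", perKloc);
    }
}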

SLOC can be changed dramatically by adding extra empty lines ("for readability") or by adding or removing comments. So relying on SLOC alone can lead to confusion.

Why don't they understand that the SLOC hasn't changed, but the software does more than it did yesterday because you've added new features or fixed bugs?
Now explain it to them like this. Measuring how much work was done in your code by comparing the lines of code is the same as measuring how many features are in your cell phone by comparing its size. Cell phones have decreased in size over the last 20 years while adding more features because of technological improvements and new techniques. Good code follows this same principle: we can express the same logic in fewer and fewer lines of code, making it faster to run, easier to maintain, and simpler to understand as we improve our understanding of the problem and introduce new techniques for development.
I would get them to focus on the business value returned through feature development, maintenance, and bug fixes. If whoever the software is for is happy and can see improvement, don't sweat the SLOC.
Go read this:
https://stackoverflow.com/questions/3800707/what-is-negative-code

Related

How to implement deterministic single threaded network simulation

I read about how FoundationDB does its network testing/simulation here: http://www.slideshare.net/FoundationDB/deterministic-simulation-testing
I would like to implement something very similar, but cannot figure out how they actually implemented it. How would one go about writing, for example, a C++ class that does what they do? Is it possible to do the kind of simulation they do without doing any code generation (as they presumably do)?
Also: how can a simulation be repeated if it contains random events? Each run would require choosing new random values and thus would not be the same as the run before. Maybe I am missing something here... I hope somebody can shed a bit of light on the matter.
You can find a little bit more detail in the talk that went along with those slides here: https://www.youtube.com/watch?v=4fFDFbi3toc
As for the determinism question, you're right that a simulation cannot be repeated exactly unless all possible sources of randomness and other non-determinism are carefully controlled. To that end:
(1) Generate all random numbers from a PRNG that you seed with a known value.
(2) Avoid any sort of branching or conditionals based on facts about the world which you don't control (e.g. the time of day, the load on the machine, etc.), or if you can't help that, then pseudo-randomly simulate those things too.
(3) Ensure that whatever mechanism you pick for concurrency has a mode in which it can guarantee a deterministic execution order.
Since it's easy to mess all those things up, you'll also want to have a way of checking whether determinism has been violated.
All of this is covered in greater detail in the talk that I linked above.
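A minimal Java sketch of points (1)-(3), which is my own illustration and not FoundationDB's actual design: all randomness comes from one seeded PRNG, simulated time replaces wall-clock time, and a single thread drains an event queue in a deterministic order, so the same seed replays the same run.
import java.util.Comparator;
import java.util.PriorityQueue;
import java.util.Random;

public class DeterministicSim {
    // Simulated event: fires at a simulated time; seq breaks ties deterministically.
    private record Event(double time, long seq, Runnable action) {}

    // (1) One seeded PRNG is the only source of randomness in the simulation.
    private final Random rng;
    // (3) A single thread drains this queue, so execution order is deterministic.
    private final PriorityQueue<Event> queue = new PriorityQueue<>(
            Comparator.comparingDouble(Event::time).thenComparingLong(Event::seq));
    // (2) Simulated time replaces wall-clock time, machine load, and similar world facts.
    private double now = 0.0;
    private long seqCounter = 0;

    public DeterministicSim(long seed) { this.rng = new Random(seed); }

    public void schedule(double delay, Runnable action) {
        queue.add(new Event(now + delay, seqCounter++, action));
    }

    public void run() {
        while (!queue.isEmpty()) {
            Event e = queue.poll();
            now = e.time();
            e.action().run();
        }
    }

    public static void main(String[] args) {
        DeterministicSim sim = new DeterministicSim(42L); // same seed => identical run
        sim.schedule(sim.rng.nextDouble(), () -> System.out.println("message delivered"));
        sim.schedule(sim.rng.nextDouble(), () -> System.out.println("node crashed"));
        sim.run();
    }
}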
In the sims I've built, the biggest issue with repeatability ends up being proper seed management (as per the previous answer). You want your simulations to give different results only when you supply a different seed to your random number generators than before.
After that, the biggest issue I've seen tends to be making sure you don't iterate over collections with nondeterministic ordering. For instance, in Java, you'd use a LinkedHashMap instead of a HashMap.
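A small illustration of that Java point (my own example): HashMap iteration order is unspecified and can change across JVM versions or key sets, while LinkedHashMap preserves insertion order, so iterating it is repeatable.
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class IterationOrder {
    public static void main(String[] args) {
        // Unspecified order: may differ across JVM versions or as keys change.
        Map<String, Integer> unordered = new HashMap<>();
        // Insertion order: iteration is the same on every run.
        Map<String, Integer> ordered = new LinkedHashMap<>();
        for (String node : new String[] {"nodeC", "nodeA", "nodeB"}) {
            unordered.put(node, node.length());
            ordered.put(node, node.length());
        }
        System.out.println("HashMap order:       " + unordered.keySet());
        System.out.println("LinkedHashMap order: " + ordered.keySet());
    }
}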

Tool or technique to compare and group diffs by similarity

I have developed a system that allows visitors to submit typo corrections for my blog. It works by having a small client-side app which then sends unified diffs to a server. Behind that, I have an interface which allows me to see all diffs in a nice graphical way, sort them, etc.
However I am thinking that as time passes, many visitors will submit corrections for the same things before I have time to fix them. So I would need a way to group similar or identical diffs together.
Identical diffs are easy enough. But there might be people who fix errors differently, e.g. using American or British spellings, different rules for punctuation, varying understandings of unclear phrases, that kind of thing. Grouping similar diffs would be tremendously helpful.
Are there techniques, algorithms, or tools that are specifically designed or can be used to compute the similarity of diffs?
I believe that you have two problems to solve: 1. recognizing fixes for the same text (e.g. the same typo location), 2. potentially removing those with the same or nearly equal solutions, and at least grouping all the patches related to that location.
Problem 1: The unified diff format is somewhat OK as it gives the lines, but a word-level or character-level diff (for example, counting each word as a line, as wdiff does) might be more precise and help you group the patches more accurately.
Problem 2: If the patches are identical, as you noted, it is trivial; if they are different, solving problem 1 has already done much of the work. You can of course apply normalization, such as stripping inflected word endings (removing 's', 'ing' and so on at the end of words, for example) or lower-casing, to the replacement part of the unified diffs before comparison, thus helping group together nearly identical solutions.
Problem 1 is the problem posed by integration or merging of patches. Problem 2 is more relevant to your particular case.
Maybe you could adopt the Damerau-Levenshtein algorithm. It is used to calculate the distance between two strings.
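As one possible sketch (my own, not from the answers above): normalize the replacement text of each hunk (lower-case, collapse whitespace) and then use an edit distance to group hunks that fix the same location in nearly the same way. For brevity this uses plain Levenshtein distance, the simpler cousin of Damerau-Levenshtein (which additionally counts transpositions).
public class DiffSimilarity {
    // Classic dynamic-programming Levenshtein distance between two strings.
    static int levenshtein(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        }
        return d[a.length()][b.length()];
    }

    // Normalize a hunk's replacement text before comparing, as suggested above.
    static String normalize(String s) {
        return s.toLowerCase().replaceAll("\\s+", " ").trim();
    }

    // Two fixes for the same location are "similar" if their normalized texts are close.
    static boolean similar(String fixA, String fixB, int threshold) {
        return levenshtein(normalize(fixA), normalize(fixB)) <= threshold;
    }

    public static void main(String[] args) {
        System.out.println(similar("the colour is grey", "The color is gray", 3)); // true
    }
}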

Set custom production firing time in ACT-R

When defining a model in ACT-R, I would like to set a different firing time for each of my productions.
How could I do that?
Thanks!
Not too many ACT-R modelers here, huh?
First off, keep a copy of the ACT-R reference manual handy. This a great resource that answers 90% of the questions you will have.
You can set a production's action time using (spp <production-name> :at <time>) or you can set the default action time using (sgp :dat <time>). Times are in seconds, so the default is .05.
That being said, you should modify these parameters very rarely, if at all. The whole point of production firing time is that it's supposed to represent a psychological constant. If you're tinkering with this, your model may fit the data but is less likely to be psychologically plausible. And if you don't care about psychological plausibility, then you shouldn't be using ACT-R! But there's an exception to every rule, so proceed with caution.
While this is a bit old, this question still comes up fairly high on Google when searching for ACT-R production firing times, so I feel it is acceptable to post a response.
As a published ACT-R modeler with 4 years under my belt, I would like to echo Jeff's statements. You very, very rarely modify most ACT-R parameters for the exact reason Jeff stated. All aspects of ACT-R and the amount of time certain modules take to fire are empirically backed by many studies. If you start changing these, then your model, like Jeff said, is completely implausible. While some modelers do change these values, they have empirical data to back up their reasons for changing any parameters.

Looking for examples where knowledge of discrete mathematics is helpful [closed]

Inspired after watching Michael Feather's SCNA talk "Self-Education and the Craftsman", I am interested to hear about practical examples in software development where discrete mathematics have proved helpful.
Discrete math has touched every aspect of software development, as software development is based on computer science at its core.
http://en.wikipedia.org/wiki/Discrete_math
Read that link. You will see that there are numerous practical applications, although this wikipedia entry speaks mainly in theoretical terms.
Techniques I learned in my discrete math course from university helped me quite a bit with the Professor Layton games.
That counts as helpful... right?
There are a lot of real-life examples where map coloring algorithms are helpful, besides just for coloring maps. The question on my final exam had to do with traffic light programming on a six-way intersection.
As San Jacinto indicates, the fundamentals of programming are very much bound up in discrete mathematics. Moreover, 'discrete mathematics' is a very broad term. These things perhaps make it harder to pick out particular examples. I can come up with a handful, but there are many, many others.
Compiler implementation is a good source of examples: obviously there's automata / formal language theory in there; register allocation can be expressed in terms of graph colouring; the classic data flow analyses used in optimizing compilers can be expressed in terms of functions on lattice-like algebraic structures.
A simple example of the use of directed graphs is a build system that orders the dependencies involved in individual tasks by performing a topological sort. I suspect that if you tried to solve this problem without having the concept of a directed graph then you'd probably end up trying to track the dependencies all the way through the build with fiddly book-keeping code (and then finding that your handling of cyclic dependencies was less than elegant).
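A minimal Java sketch of that idea (my own, with a hypothetical dependency map): Kahn's algorithm runs each task only after everything it depends on, and a leftover task signals a cyclic dependency.
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BuildOrder {
    // Kahn's algorithm: dependsOn maps each task to the tasks it depends on.
    static List<String> topologicalOrder(Map<String, List<String>> dependsOn) {
        Map<String, Integer> remaining = new HashMap<>();        // unmet dependencies per task
        Map<String, List<String>> dependents = new HashMap<>();  // reverse edges
        for (var e : dependsOn.entrySet()) {
            remaining.put(e.getKey(), e.getValue().size());
            for (String dep : e.getValue()) {
                dependents.computeIfAbsent(dep, k -> new ArrayList<>()).add(e.getKey());
                remaining.putIfAbsent(dep, 0);
            }
        }
        Deque<String> ready = new ArrayDeque<>();
        remaining.forEach((task, n) -> { if (n == 0) ready.add(task); });
        List<String> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            String task = ready.poll();
            order.add(task);
            for (String next : dependents.getOrDefault(task, List.of())) {
                if (remaining.merge(next, -1, Integer::sum) == 0) ready.add(next);
            }
        }
        if (order.size() != remaining.size())
            throw new IllegalStateException("cyclic dependency detected");
        return order;
    }

    public static void main(String[] args) {
        // Hypothetical build: link depends on compile, compile depends on generate-sources.
        System.out.println(topologicalOrder(Map.of(
                "link", List.of("compile"),
                "compile", List.of("generate-sources"),
                "generate-sources", List.of())));   // [generate-sources, compile, link]
    }
}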
Clearly most programmers don't write their own optimizing compilers or build systems, so I'll pick an example from my own experience. There is a company that provides road data for satnav systems. They wanted automatic integrity checks on their data, one of which was that the network should all be connected up, i.e. it should be possible to get to anywhere from any starting point. Checking the data by trying to find routes between all pairs of positions would be impractical. However, it is possible to derive a directed graph from the road network data (in such a way as it encodes stuff like turning restrictions, etc) such that the problem is reduced to finding the strongly connected components of the graph - a standard graph-theoretic concept which is solved by an efficient algorithm.
I've been taking a course on software testing, and 3 of the lectures were dedicated to reviewing discrete mathematics, in relation to testing. Thinking about test plans in those terms seems to really help make testing more effective.
Understanding of set theory in particular is especially important for database development.
I'm sure there are numerous other applications, but those are two that come to mind here.
Just one example of many, many...
In build systems it's popular to use topological sorting of the jobs to be done.
By build system I mean any system where we have to manage jobs with a dependency relation.
It can be compiling a program, generating a document, constructing a building, or organizing a conference - so there are applications in task management tools, collaboration tools, etc.
I believe testing itself properly proceeds from modus tollens, a concept of propositional logic (and hence discrete math), modus tollens being:
P=>Q. !Q, therefore !P.
If you plug in "If the feature is working properly, the test will pass" for P=>Q, and then take !Q as given ("the test did not pass"), then, if all these statements are factually correct, you have a valid, sound basis for returning the feature for a fix. By contrast, many, maybe most testers operate by the principle:
"If the program is working properly, the test will pass. The test passed, therefore the program is working properly."
This can be written as: P=>Q. Q, therefore P.
But this is the fallacy of "affirming the consequent" and does not show what the tester believes it shows. That is, they mistakenly believe that the feature has been "validated" and can be shipped. When Q is given and P=>Q holds, P may in fact be either true or false, and this can be shown with a truth table.
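For reference, a plain truth table for P=>Q (my own addition) makes the gap visible: given that P=>Q holds and Q is true, both of the first two rows remain possible, so a passing test leaves P undetermined.
P      Q      P=>Q
true   true   true    <- test passed, feature works
false  true   true    <- test passed, feature is still broken (the fallacy)
true   false  false   (excluded once we assume P=>Q holds)
false  false  true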
Modus tollens is core to Karl Popper's notion of science as falsification, and testing should proceed in much the same way. We're attempting to falsify the claim that the feature always works under every explicit and implicit circumstance, rather than attempting to verify that it works in the narrow sense that it can work in some prescribed way.

Essential techniques for pinpointing missing requirements?

An initial draft of the requirements specification has been completed and now it is time to take stock of requirements and review the specification. Part of this process is to make sure that there are no sizeable gaps in the specification. Needless to say, gaps lead to highly inaccurate estimates, inevitable scope creep later in the project, and ultimately to a death march.
What are the good, efficient techniques for pinpointing missing and implicit requirements?
This question is about practical techniques, not general advice, principles or guidelines.
Missing requirements are anything crucial for completeness of the product or service that was not thought of or was forgotten.
Implicit requirements are something that users or customers naturally assume is going to be a standard part of the software without having to be explicitly asked for.
I am happy to revisit the accepted answer, as long as someone submits a better, more comprehensive solution.
Continued, frequent, frank, and two-way communication with the customer strikes me as the main 'technique' as far as I'm concerned.
It depends.
It depends on whether you're being paid to deliver what you said you'd deliver or to deliver high quality software to the client.
If the former, simply eliminate ambiguity from the specifications and then build what you agreed to. Try to stay away from anything not measurable (like "fast", "cool", "snappy", etc...).
If the latter, what Galwegian said + time or simply cut everything not absolutely drop-dead critical and build that as quickly as you can. Production has a remarkable way of illuminating what you missed in Analysis.
evaluate the lifecycle of the elements of the model with respect to a generic/overall model such as
acquisition --> stewardship --> disposal
do you know where every entity comes from and how you're going to get it into your system?
do you know where every entity, once acquired, will reside, and for how long?
do you know what to do with each entity when it is no longer needed?
for a more fine-grained analysis of the lifecycle of the entities in the spec, make a CRUDE matrix for the major entities in the requirements; this is a matrix with the operations/applications as the rows and the entities as the columns. In each cell, put a C if the application Creates the entity, R for Reads, U for Updates, D for Deletes, or E for "Edits"; 'E' encompasses C,R,U, and D (most 'master table maintenance' apps will be Es). Then check each column for C,R,U, and D (or E); if one is missing (except E), figure out if it is needed. The rows and columns of the matrix can be rearranged (manually or using affinity analysis) to form cohesive groups of entities and applications which generally correspond to subsystems; this may assist with physical system distribution later.
It is also useful to add a "User" entity column to the CRUDE matrix and specify for each application (or feature or functional area or whatever you want to call the processing/behavioral aspects of the requirements) whether it takes Input from the user, produces Output for the user, or Interacts with the user (I use I, O, and N for this, and always make the User the first column). This helps identify where user-interfaces for data-entry and reports will be required.
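As a tiny hypothetical example (invented for illustration, not from the original answer), a CRUDE matrix might look like this; the Invoice column has a Create and Reads but no Update or Delete, which is exactly the kind of gap this check surfaces:
                 User   Customer   Invoice
Customer admin    I        E          -
Order entry       I        R          C
Billing report    O        R          R
Here, no application ever updates or deletes an Invoice - so an invoice-correction or archiving requirement may be missing.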
the goal is to check the completeness of the specification; the techniques above are useful for checking whether the life-cycles of the entities are 'closed' with respect to the entities and applications identified
Here's how you find the missing requirements.
Break the requirements down into tiny little increments. Really small. Something that can be built in two weeks or less. You'll find a lot of gaps.
Prioritize those into what would be best to have first, what's next down to what doesn't really matter very much. You'll find that some of the gap-fillers didn't matter. You'll also find that some of the original "requirements" are merely desirable.
Debate the differences of opinion as to what's most important to the end users and why. Two users will have three opinions. You'll find that some users have no clue, and none of their "requirements" are required. You'll find that some people have no spine, and things they aren't brave enough to say out loud are "required".
Get a consensus on the top two or three only. Don't argue out every nuance. It isn't possible to envision software. It isn't possible for anyone to envision what software will be like and how they will use it. Most people's "requirements" are descriptions of how they struggle to work around the inadequate business processes they're stuck with today.
Build the highest-priority, most important part first. Give it to users.
GOTO 1 and repeat the process.
"Wait," you say, "What about the overall budget?" What about it? You can never know the overall budget. Do the following.
Look at each increment defined in step 1. Provide a price-per-increment. In priority order. That way someone can pick as much or as little as they want. There's no large, scary "Big Budgetary Estimate With A Lot Of Zeroes". It's all negotiable.
I have been using a modeling methodology called Behavior Engineering (bE) that uses the original specification text to create the resulting model. Once you have the model, it is easier to identify missing or incomplete sections of the requirements.
I have used the methodology on about six projects so far, ranging from fewer than a hundred requirements to over 1300 requirements. If you want to know more, I would suggest going to www.behaviorengineering.org - there are some really good papers there regarding the methodology.
The company I work for has created a tool to perform the modeling. The work rate to actually create the model is about 5 requirements an hour for a novice and about 13 requirements an hour for an expert. The cool thing about the methodology is that you don't really need to know anything about the domain the specification is written for. Using just the user text, such as nouns and verbs, the modeller will find gaps in the model in a very short period of time.
I hope this helps
Michael Larsen
How about building a prototype?
While reading tons of literature about software requirements, I found these two interesting books:
Problem Frames: Analysing & Structuring Software Development Problems by Michael Jackson (not the singer! :-).
Practical Software Requirements: A Manual of Content and Style by Benjamin Kovitz.
These two authors really stand out from the crowd because, in my humble opinion, they are making a really good attempt to turn development of requirements into a very systematic process - more like engineering than art or black magic. In particular, Michael Jackson's definition of what requirements really are - I think it is the cleanest and most precise that I've ever seen.
I wouldn't do these authors a good service by trying to describe their approach in a short posting here, so I am not going to do that. But I will try to explain why their approach seems extremely relevant to your question: it allows you to boil down most (not all, but most!) of your requirements development work to processing a bunch of checklists telling you what requirements you have to define to cover all important aspects of the entire customer's problem. In other words, this approach is supposed to minimize the risk of missing important requirements (including those that often remain implicit).
I know it may sound like magic, but it isn't. It still takes a substantial mental effort to come to those "magic" checklists: you have to articulate the customer's problem first, then analyze it thoroughly, and finally dissect it into so-called "problem frames" (which come with those magic checklists only when they closely match a few typical problem frames defined by the authors). Like I said, this approach does not promise to make everything simple. But it definitely promises to make the requirements development process as systematic as possible.
If requirements development in your current project is already quite far from the very beginning, it may not be feasible to try to apply the Problem Frames approach at this point (although it greatly depends on how your current requirements are organized). Still, I highly recommend reading those two books - they contain a lot of wisdom that you may still be able to apply to the current project.
My last important notes about these books:
As far as I understand, Mr. Jackson is the original author of the idea of "problem frames". His book is quite academic and theoretical, but it is very, very readable and even entertaining.
Mr. Kovitz's book tries to demonstrate how Mr. Jackson's ideas can be applied in real practice. It also contains tons of useful information on writing and organizing the actual requirements and requirements documents.
You can probably start with Kovitz's book (and refer to Mr. Jackson's book only if you really need to dig deeper on the theoretical side). But I am sure that, at the end of the day, you should read both books, and you won't regret it. :-)
HTH...
I agree with Galwegian. The technique described is far more efficient than the "wait for customer to yell at us" approach.