NetBeans 7.4 introduces a "10 lines max" per method rule. Where does this rule come from? [closed]

NetBeans 7.4 beta is currently available for public download, and it introduces a weird warning rule by default:
Method length is 16 lines (10 allowed)
My question is: is this an accepted coding convention that can be traced to some documented source? NetBeans support/devs say it's not a bug, but they don't explain why they allow only 10 lines or where exactly this rule has its origin.

You can change the maximum method/function length warning behavior in NetBeans options (it's under Tools -> Options -> Editor, in the Hints tab, under the "Too Many Lines" section in the list of checkboxes).
There you can set the maximum number of lines, how you would like to be warned, etc.
I believe the rule was introduced by the NetBeans developers because, when working in teams, the automated tools that QAs use to "inspect" code flag long method declarations/function bodies. Unfortunately, the use of automated tools by "code analysts" is on the rise, whilst their understanding of the reasons behind them is still limited. I'm not saying that your functions should be hundreds of lines long - that's just plain wrong - but a hard-coded number as a coding law? Come on!

The "10 lines rule" has to do with enforcing test-driven development. The theory is that any method that has more than ten lines can be better broken down into units that are testable. it holds up in theory, but in practice a warning like this is more annoying than helpful.

I don't think there is a convention about that, and it's very hard to keep functions small, particularly when working on big projects.
I feel the problem with NetBeans (or with the rule) is that it also counts lines containing just a single bracket or documentation.
This article gives his opinion on writing functions of 5-15 lines.

I always disable this warning, as well as the warning about too many nested blocks. I understand the idea behind not having large methods, but a LOT of the time it's just not practical, and as someone else mentioned, if you keep splitting your code into arbitrary functions just to appease the IDE, you end up with spaghetti code jumping all over the place, and refactoring becomes a huge problem later on as well.
Same goes for the line length limit warning: maybe a line 50 characters long made you scroll sideways in 1985, but today we have larger monitors (in color now as well!). I've seen people mutilate a line of code by shortening variable names so that it fits within the limit, turning a perfectly readable line into an indecipherable mess just so it passes the check.
Personally I think those three rules together have caused more garbage spaghetti code than they have helped create readable / testable code.

I think there is no such rule. I always thought a good convention would be to have no more lines of code in a class than one can read without scrolling. 10 lines doesn't seem like very much to me, but in general the limit is there for overview purposes and easier testing.

Related

How can I beautify my code in MATLAB (tabs, deleting unnecessary spaces etc.)? [closed]

I'm looking for a way to beautify my code in MATLAB. I'm talking about tabs, deleting unnecessary spaces etc., the way Eclipse does it with Ctrl+Shift+F.
The smart indentation (Ctrl+I) is probably all you need (as @Matteo V and @Cris Luengo already mentioned).
However, there are a few other neat tricks that you might want to have a look at if you are really into code development:
Well, first have a look at MATLAB's Improve Code Readability documentation page. You could use the Apply smart indenting while typing option in the Preferences > MATLAB > Editor/Debugger > Language > Indenting section (it should be turned on by default, but I like the Indent all functions setting). There are a bunch of other settings that you may want to explore.
If you dig deeper into the MATLAB IDE, you will notice that you can adjust almost everything to your preferences, but the way to do it is not always documented on the web... however, the local documentation (call doc) contains the info you may be looking for; see this blog post.
I am not aware of any automatic detection of double spaces or similar, but you might end up writing your own little callback function. Most languages ignore this anyway (except perhaps Python). Code readability is usually a topic that the programmer(s) should care about... and not the machines ;)
Further tips:
Respect the Right-hand text limit, which is the vertical gray line in your editor indicating the maximum number of characters a single line of code should have. If it is a comment, wrap it. If it is an expression, try to move some of the commands out into a dedicated variable.
Use equally long variable names. (There is no style guide, as in Python, that says you should use normal words and underscores etc.) E.g. if you have two variables describing a commanded and a measured velocity, you could call them v_cmd and v_act, and your code aligns perfectly if you apply the same manipulations to both variables ;)
Use sections. With %% (the space after it is important) at the beginning of a line in the editor, you create a section (you'll notice the slight yellow background color and the bold text that follows this command). It is convenient for structuring your code. You can even run entire sections via Editor tab > Run > Run Section.
Although there are programmers who claim that good code speaks for itself (and therefore doesn't need any comments), in my experience writing comments has never been a bad idea. It improves the readability of your code.
The answer might have been a bit elaborate for such an innocent question ... oO

Is the ending semicolon important? [duplicate]

Hello people more knowledgeable than me,
I'm taking some online courses for SQL and I am curious about something. Some instructors draft scripts and don't seem to be concerned about ending simple commands with a semicolon; other instructors, however, religiously add the semicolon at all times.
I'm just wondering, how important is the semicolon, should it be something that is always part of your script or does it not matter?
I know it's a pretty simple question, but the intro classes don't really define exactly why it's needed and since I'm seeing it used differently... I just want to make sure I understand.
Thank you!
Terminating semi-colons will be required in some future version of SQL Server.
Although it's not currently required, it's not a bad habit to get into.
For what it's worth, I neglect semi-colons all too much and my scripts nearly never break, so my best guess is no.
Still, they make the code more readable, since you do add a layer of separation to your code.
Oh, you must use them before CTEs which aren't first in a batch, though.

Is simplified semantics for the 'blame' command a good thing?

I'm working on a new weave-based data structure for storing version control history. This will undoubtedly cause some religious wars about whether it's The Right Way Of Doing Things when it comes out, but that isn't my question right now.
My question has to do with what output blame should give. When a line of code has been added, removed, and merged into itself a number of times, it isn't always clear what revision should get blame for it. Notably, this means that when a section of code is deleted, all record of it having been there is gone, and there is no blame for the removal. Everyone I've gone over this issue with has said that trying to do better simply isn't worth it. Sometimes people put in the hack that the line after the deleted section has its blame changed from whatever it actually was to the revision in which the section got deleted. Presumably if the section is at the end then the last line gets its blame changed, and if the file winds up empty then the blame really does disappear into the aether, because there's literally nowhere left to put blame information. For various technical reasons I won't be using this hack, but I assume that continuing with this completely undocumented but de facto standard practice will be uncontroversial (but feel free to flame me and get it out of your system).
Moving on to my actual question. Usually in blame for each line you look at the complete history of where it was added and removed in the history and using three-way merge (or, in the case of criss-cross merges, random bullshit) and based on the relationships between those you determine whether the line should have been there based on its history, and if it shouldn't but is then you mark it as new with the current revision. In the case where a line occurs in multiple ancestors with different blames then it picks which one to inherit arbitrarily. Again, I assume that continuing with this completely undocumented but de facto standard practice will be uncontroversial.
Where my new system diverges is that rather than doing a complicated calculation over the whole history of whether a given line should be in the current revision, it simply looks at the immediate ancestors, and if the line is in any of them it picks an arbitrary one to inherit the blame from. I'm making this change for largely technical reasons (and it's entirely possible that other blame implementations do the same thing, for similar technical reasons and a lack of caring), but after thinking about it a bit, part of me actually prefers the new behavior as being more intuitive and predictable than the old one. What does everybody think?
I actually wrote one of the blame implementations out there (Subversion's current one, I believe, unless someone replaced it in the past year or two). I helped with some others as well.
At least most implementations of blame don't do what you describe:
Usually in blame for each line you look at the complete history of where it was added and removed in the history and using three way merge (or, in the case of criss-cross merges, random bullshit) and based on the relationships between those you determine whether the line should have been there based on its history, and if it shouldn't but is then you mark it as new with the current revision. In the case where a line occurs in multiple ancestors with different blames then it picks which one to inherit arbitrarily. Again, I assume that continuing with this completely undocumented but de facto standard practice will be uncontroversial.
Actually, most blame implementations are significantly less complex than this and don't bother trying to use the relationships at all. They just walk the parents in some arbitrary order, using simple delta structures (usually the same internal structure whatever diff algorithm they have uses before turning it into textual output) to see whether a chunk changed, and if so, blame it and mark that line as done.
For example, Mercurial just does an iterative depth first search until all lines are blamed. It doesn't try to take into account whether the relationships make it unlikely it blamed the right one.
Git does do something a bit more complicated, but still, not quite like you describe.
Subversion does what Mercurial does, but the history graph is very simple, so it's even easier.
In turn, what you are suggesting is, in fact, what all of them really do:
Pick an arbitrary ancestor and follow that path down the rabbit hole until it's done, and if it doesn't cause you to have blamed all the lines, arbitrarily pick the next ancestor, continue until all blame is assigned.
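To make that concrete, here is a rough sketch of that walk in Python (my own illustration, not code from Git, Mercurial, or Subversion; the two helper callbacks are made up, and it glosses over line renumbering between revisions):

    def blame(rev, lines, parents_of, lines_in_common):
        # parents_of(rev)            -> list of parent revisions, walked in arbitrary order
        # lines_in_common(rev, par)  -> subset of `lines` that also exists in `par`,
        #                               e.g. taken from a line-level diff
        blamed = {}
        unassigned = set(lines)
        for parent in parents_of(rev):  # arbitrary order; first match wins
            inherited = lines_in_common(rev, parent) & unassigned
            if inherited:
                # Follow this ancestor down the rabbit hole for those lines.
                blamed.update(blame(parent, inherited, parents_of, lines_in_common))
                unassigned -= inherited
            if not unassigned:
                break
        # Whatever no ancestor accounts for gets blamed on this revision.
        for line in unassigned:
            blamed[line] = rev
        return blamed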
On a personal level, I prefer your simplified option.
Reason: Blame isn't used very much anyway.
So I don't see a point in wasting a lot of time doing a comprehensive implementation of it.
It's true. Blame has largely turned out to be one of those "pot of gold at the end of the rainbow" features. It looked really cool to those of us standing on the ground, dreaming about a day when we could just click on a file and see who wrote which lines of code. But now that it's widely implemented, most of us have come to realize that it actually isn't very helpful. Check the activity on the blame tag here on Stack Overflow. It is underwhelmingly desolate.
I have run across dozens of "blame-worthy" scenarios in recent months alone, and in most cases I have attempted to use blame first, and found it either cumbersome or utterly unhelpful. Instead, I found the information I needed by doing a simple filtered changelog on the file in question. In some cases, I could have found the information using Blame as well, had I been persistent, but it would have taken much longer.
The main problem is code formatting changes. The first-tier blame for almost everything was listed as... me! Why? Because I'm the one responsible for fixing newlines and tabs, re-sorting function order, splitting functions into separate utility modules, fixing comment typos, and improving or simplifying code flow. And if it wasn't me, someone else had done a whitespace change or block move somewhere along the way as well. In order to get a meaningful blame on anything dating back further than I could remember without the help of blame, I had to roll back revisions and re-blame. And re-blame again. And again.
So in order for blame to actually be a useful time saver in more than the luckiest of situations, it has to be able to work its way heuristically past newline, whitespace, and ideally block copy/move changes. That sounds like a very tall order, especially when, most of the time, scouring the changelog for a single file won't yield many diffs anyway and you can just sift through them by hand fairly quickly. (The notable exception being, perhaps, badly engineered source trees where 90% of the code is stuffed in one or two ginormous files... but who does much of that anymore in a collaborative coding environment these days?)
Conclusion: Give it a bare-bones implementation of blame, because some people like to see "it can blame!" on the features list. And then move on to things that matter. Enjoy!
The line-merge algorithm is stupider than the developer. If they disagree, that just indicates that the merger is wrong rather than indicating a decision point. So, the simplified logic should actually be more correct.

Can aptitude for learning Programming paradigms be influenced by culture or native language's grammar? [closed]

It is well known that different people have different aptitudes regarding various programming paradigms (e.g. some people have trouble learning non-procedural, especially functional languages. Some people have trouble understanding pointers - see Joel Spolsky's blog for musings on that. Some people have trouble grasping recursion).
I was recently reading about a study that looked at how the grammar of someone's native language affected their speed of learning math. Can't find that article now but a quick googling found this reference.
That led me to wondering whether someone's native culture or first language might affect their aptitude towards various programming paradigms. I'm more curious about positive influences - e.g. some trait that makes it easier/faster for someone to learn a particular paradigm, for example a native-language grammar that is very recursion-oriented.
To be clear, I'm looking at how culture/language grammar may affect the differences in aptitude of the same person towards various paradigms, as opposed to how it affects overall aptitude towards programming between different persons.
Important: the only answers I'm interested in are either references to scientific studies, or personal observations from someone intimately familiar with a particular culture/language, including from their own experience.
E.g. I'm not interested in your opinion of how Chinese being your first language affects anything unless you speak Chinese or have worked extensively with an extremely large set of Chinese-native programmers.
I'm OK with your guesstimates not based on scientific studies, but please be sure to supply your reasoning about plausible causes of your observation.
I'm not interested in culture-bashing (any such comments will be deleted or flagged for deletion).
I'm also not particularly interested in culture-building - we all know Linus is from Finland and Tetris was written in Russia and Larry Wall is an American. Any culture/nation can produce a brilliant mind in any discipline. I'm interested in averages.
Disclaimer: I was a Cultural Anthropologist before I got into programming, so you know I'm going to be on a high horse, here.
Obviously, a person's history will have an impact on their aptitude for any particular task, but I think this has less to do with the structure or grammar of a person's language than it does with the particular material conditions of the culture in which that language is spoken.
For example, a pair of Anthropologists in the 60's went to various African communities and tested people's susceptibility to various optical illusions. Here is a classic one:
In this illusion, the bottom line looks longer, because the angled lines connecting it make it appear to be off in the distance.
These Anthropologists found that in many African cultures, the illusion doesn't work at all - people consider the lines to be the same length. By refining their study, they found that the only people who were susceptible to the illusion were people who had grown up in an urban environment. They hypothesized that the illusion did not work on people from remote jungle environments, because these people had little or no experience with right angles and seeing things at very long distances.
My point with this is that even if you successfully found a correlation between programmers' native languages and their abilities with certain aspects of programming, you couldn't be sure that the correlation wasn't spurious. For example, you might think that Asians tend to be bad drivers, and you might even be able to demonstrate this statistically. If you then concluded, however, that "bad driving" is some sort of fundamental characteristic of Asian-ness, you would be ignoring the fact that Asians are more likely to be from Asia, and thus to have had much less experience driving cars (or even being in cars) while growing up than Westerners (and especially Americans) have had.
With programming, we might think that a particular language inhibits programming ability, and not take note of the fact that the society in which that language is spoken has much less access to computers, and thus people growing up with that language appear to have less programming aptitude or ability to understand certain programming concepts.
In short, I wouldn't give much credence to the idea that language inhibits anyone's ability to understand anything in particular. The human mind is much too flexible and adaptable for that to be true.
This seems analogous to the Sapir-Whorf Hypothesis - that the facilities of a language affect the ease with which one can cogitate about certain subjects, or in the words of the Wikipedia article:
"The linguistic relativity principle (also known as the Sapir-Whorf Hypothesis) is the idea that the varying cultural concepts and categories inherent in different languages affect the cognitive classification of the experienced world in such a way that speakers of different languages think and behave differently because of it."
( http://en.wikipedia.org/wiki/Linguistic_relativity )
While there appears to be little definitive information here, the discussions appear to be relevant to the question, and perhaps worthy of further exploration.
Just a few random thoughts. I think the influences are generally very weak and can most of the time be neglected, but they do exist, and sometimes they can make themselves felt.
In Chinese grammar, for example, we don't quite distinguish between plural and singular forms, but I wouldn't think we Chinese have any noticeable difficulty understanding the concepts of scalar and array in Perl. The reason might be this: although we generally don't need particular suffixes or changes in form to indicate whether something is singular or plural, we do have the concepts of plural and singular, and we mostly depend upon the context to tell them apart. Grammar-wise, context in Chinese may well be far more important than in languages belonging to the Indo-European family. We omit a lot of things, sometimes when they have already been mentioned and sometimes when we simply presume that they are implicitly well understood by the listener. In either case, we don't need indefinite and definite articles (a, an, the) or relative pronouns like that, which and who to indicate whether something is being mentioned for the first time or yet again. Maybe that's partially why I feel very comfortable with Perl's default variable $_: print, chomp and split all act upon $_, which has never been explicitly mentioned. But this is quite subjective.
I think the Chinese language is characterized more by implicitness and fuzziness than Indo-European languages are. For example, we never pay attention to subject-verb agreement and we never conjugate verbs to denote tenses. This could mean that the Chinese are inclined to use a not-quite-so-logical mode of thinking. One of my teachers once used an example to try to generalize (or maybe over-generalize) the difference between the Chinese non-logical mode of thinking and the American logical mode of thinking.
If the American version of quarrelling should be this:
“I can lick you.”
“No, you can’t.”
“Yes, I can.”
“No, you can’t.”
“I can.”
“you can’t.”
“Can!”
“Can’t!”
The Chinese version (translated in English) would be something like this:
I can lick you.
How dare you!
What if I dare?
Then you try.
Try? Hm, you wait and see.
Wait and see? I’m not afraid.
Not afraid? OK. You don’t run away.
Who runs away? Come on and lick
Well, I agree that there may be some differences between the Chinese way of thinking and that of other countries, but the example looks like a stereotype, because the Chinese may easily switch to the American version. Back to the question: I think language and culture may indeed influence a programmer's learning process in one way or another, but this influence is definitely not decisive. Maybe the culture you're exposed to makes you feel a little bit uncomfortable getting used to some notion in some programming language, recursion or whatever, but time will solve it.
I was recently reading about a study that looked at how the grammar of someone's native language affected their speed of learning math. ... Important: the only answers I'm interested in are either references to scientific studies, or personal observations from someone intimately familiar with a particular culture/language, including from their own experience.
I learned a lot of maths before I started programming (enough to count as "intimately familiar"), and IMO programming is relatively easy: more tangible.
Sometimes I've wondered whether it's beneficial to know more than one human language: if you only know one language, then you might think of the words "cat" and "dog" as being values, i.e. synonymous with cat and dog objects; but if you're fluent in more than one language, then "cat" and "dog" become pointers: because for example the French words "chat" and "chien" are referring/pointing to the same objects as "cat" and "dog", and so clearly there's a distinction between the word and the object.
It's disappointing that you posted the question without linking to the article which inspired it. I thought of "reverse Polish notation" and wondered whether that was at all the kind of difference in "grammar" that was considered in the original study.
The reference you cite seems to rest on the assumption that making things easier helps with learning. In my understanding, there is a countereffect: without enough challenge, you're not learning enough.
There are theories/studies (anyone with a link?) that the development of language created crucial pressure on expanding the cerebral cortex and thus "made us human" (in very Darwinistic terms: more grey matter ==> better language capabilities ==> better teamwork ==> better survival as a group). So language complexity can't be all bad for learning.
(My only qualification is being an eager follower of The Frontal Cortex blog, so take this with a grain of salt.)
In German we have a strange ordering of numbers: the 10^0 and 10^1 positions are switched, but the others are normal (e.g. 25 is 'five and twenty', 125 is 'one hundred five and twenty'). It's been claimed that this makes learning numbers harder, and that German should therefore adopt a more intuitive ordering.
I guess that it helps a lot with doing additions in your head - at least if you stay below 100 or 200 - you can first add the 10^0 position and already say it / write it down while taking any carry into account for the 10^1 position.
(That doesn't continue for 10^2, I guess that would be done in writing by the majority anyway)
Also: abstractions. There are languages where numbers aren't abstracted from objects: "two coconuts" and "two sabretooth tigers" don't share a common "two" word / concept. Such a language would probably be very bad for developing math skills. Here the abstraction (separating number from object) in language is important.
Generally, I'd say the language has a strong effect on shaping a developing mind, and I see no reason why this should not extend to culture.
Of course it's still open what would be the "right kind of complexity" - for what, and how particular language features affect general improvement vs. establishment of an elite (i.e. "sharpening the skills of the gifted, while hampering the rest").
Interesting Question, no doubt - looking forward to other replies.

Plural form of word "mutex" [closed]

What is the correct plural form of the portmanteau mutex? Is it mutexes or mutices?
From a purely linguistics point of view, the correct usage is mutexes because the word mutex is not Latin in origin. Prescriptivists would wail in anguish if mutices were to enter regular usage.
The -ices usage (e.g., the plurals of index and vertex) is falling out of favor. Indexes and vertexes are both correct usage, for example.
Let their common usage decide...
GoogleFight
Everyone knows that the correct answer is Mutii.
Mutexes. It's correct in a de facto manner: the vast majority of people (in my experience, certainly) call them mutexes, not mutices, and English is a language that's defined by use. :)
As mutex is short for "mutual exclusion", I would only imagine that "mutual exclusions" would become mutexes. Mutices would be confusing. Better to be unambiguous.
As a side note: it's not a portmanteau, or it would be a mutsion.
There's no official correct form because 'mutex' hasn't gained wide enough circulation to enter any of the major English dictionaries. Thus, the most correct term is whatever is used most by people. And I think that Google hits are a pretty good indicator of (relative) usage frequency, as great_lama has pointed out.
Other English nouns that end in -ex or -ix:
Affix
Annex
Apex
Appendix
Cervix
Circumflex
Complex
Cortex
Crucifix
Duplex
Helix
Ibex
Index
Infix
Latex
Matrix
Phoenix
Prefix
Postfix
Reflex
Remix
Suffix
Vertex
Vortex
And lots more less common words. If you look up these in the dictionary, you'll find that most of them have both plurals shown as acceptable. Several have only the -exes/-ixes form, but few or none (depending on the dictionary you use) have only the -ices form.
In conclusion, I believe mutexes to be the correct plural form of mutex.
Either/or. I've seen both (though mutexes is considerably more common).
Mutex is not in any real dictionary I know of, so there's no "official answer."
Index can be pluralized to indexes or indices, though, so it makes sense that mutex could follow suit.
Since the word apex can be pluralized as either apexes or apices, I'd say you can pronounce it either mutexes or mutices. Whatever suits you.
I think that the hysterical raisins (in this case the fact that "mutex" is a portmanteau) should not be given too much weight in resolving such issues.
Perhaps it would be more useful to consider similar words and their usage; reflex -> reflexes for example.
Or, use the simplest choice: most pluralizations in English use -s/-es (depending on whether the last letter is a perceived vowel); in this case, -es.
I guess I can't see any reason to use the alternative, except as some sort of tribute to Latin, once thought to be the noblest of all languages. :)
Maybe it is like sheep? Singular and plural?