When generating code, what language should you generate?

I've worked on a number of products that make use of code generation. It seems to be the only way to achieve both a high degree of user-customizability and high execution speed.
The downside is that we require users to install a compiler (primarily on MS Windows).
This has been an ongoing headache, because vendors like MS keep obsoleting compilers, and some users tend to have more than one compiler installed.
We're considering using GNU C, and possibly C++, but even there, there are continual version issues.
I've considered possibly generating assembly language, in an effort to get off the compiler-version-treadmill, but assembly languages are all machine-specific.
Ideally there would be some way to produce generated code that would be flexible, run fast, and not expose us to the whims of third-party providers.
Maybe I'm overlooking something simple, like Java. Any ideas would be appreciated. Thanks.

If you're considering C and even assembler, take a look at LLVM first: http://llvm.org

I might be missing some context here, but could you just pin yourself to a specific version? E.g., .NET 2.0 can be installed side by side with .NET 1.1 and .NET 3.5, as well as other versions that will come out in the future. So as long as your code makes use of a specific version of a compiler, what's the problem?

I've considered possibly generating assembly language, in an effort to get off the compiler-version-treadmill, but assembly languages are all machine-specific.
That would be called a compiler :)
Why don't you stick to C90?
I haven't heard of gcc seriously violating the standard, as long as you avoid extensions.
And you can always distribute a certain version of gcc along with your product, say, 4.3.2, giving an option to users to use their own compiler at their own risk.
As long as all the code is generated by you (i.e. you don't embed your instructions into others' code), there shouldn't be any problem testing against this version and using it to compile your libraries.

If you want to generate assembly language code, you may take a look at asmjit.

One option would be to use a language/environment that provides access to the compiler from code; C# is one example.
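As a minimal sketch of that approach (using the CodeDOM compiler that ships with the .NET Framework; the Generated.Calc class and its Add method are made-up stand-ins for whatever source your generator actually emits):

    // Compile a generated C# source string at runtime and invoke it.
    using System;
    using System.CodeDom.Compiler;
    using Microsoft.CSharp;

    class CodeGenDemo
    {
        static void Main()
        {
            string source = @"
                namespace Generated {
                    public static class Calc {
                        public static int Add(int a, int b) { return a + b; }
                    }
                }";

            using (var provider = new CSharpCodeProvider())
            {
                var options = new CompilerParameters { GenerateInMemory = true };
                CompilerResults results = provider.CompileAssemblyFromSource(options, source);

                if (results.Errors.HasErrors)
                    throw new InvalidOperationException("Generated code failed to compile.");

                var calc = results.CompiledAssembly.GetType("Generated.Calc");
                var add = calc.GetMethod("Add");
                Console.WriteLine(add.Invoke(null, new object[] { 2, 3 })); // prints 5
            }
        }
    }

Because the C# compiler ships with the framework the end user already has installed, there is no separate compiler install for users to manage.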

Why not ship a GNU C compiler with your code generator? That way you have no version issues, and the client can consistently generate usable code.

It sounds like you're looking for LLVM.

Start here: The Code Generation conference

In the spirit of "it might not be too late to add my 2 cents", as in Alvin's answer: if your application is meant to last for some years, it is going to face several changes in how applications and systems work.
For instance, let's say you were thinking about this 10 years ago. I was watching Dexter back then, but I guess you actually have memories of how things were at that time. From what I can tell, multithreading was not much of an issue to developers in 2000, and now it is. So Moore's law broke for them. Before that, people didn't even care about what would happen in "Y2K".
Speaking of Moore's law, processors are indeed getting quite fast, so maybe certain optimizations won't even be that necessary. And possibly the array of optimizations will be much bigger: some processors are getting optimizations for several server-centric tasks (XML, cryptography, compression and regex! I am surprised such things can get done on a chip) and also use less energy (which is probably very important for warfare hardware...).
My point being that focusing on what exist today as a platform for tomorrow is not a good idea. Make it work today, and surely it will work tomorrow (backward-compatibility is especially valued by Microsoft, Apple is not bad it seems and Linux is very liberal about making it work as you want).
There is, yes, one thing that you can do. Attach your technology to something that just won't (likely) die, such as JavaScript. I'm serious, JavaScript VMs are getting terribly efficient nowadays and are just going to get better, plus everyone loves it, so it's not going to disappear suddenly. If you need more efficiency/features, maybe target the CLR or JVM?
Also I believe multithreading will become more and more of an issue. I have a gut feeling the number of processor cores will have a Moore's law of their own. And architectures are more than likely to change, from the looks of the cloud buzz.
PS: In any case, I believe C optimizations of the past are still quite valid under modern compilers!

I would stick to the language you are using to generate the code. You can generate and compile Java code in Java, Python code in Python, C# in C#, and even Lisp in Lisp, etc.
But it is not clear whether such languages are sufficiently fast for you. For top speed I would choose to generate C++ and use GCC for compilation.

Why not use something like SpiderMonkey or Rhino (JavaScript engines embeddable from C++ or Java, respectively)? You can export your objects into JavaScript namespaces, and your users don't have to compile anything.

Embed an interpreter for a language like Lua/Scheme into your program, and generate code in that language.

Related

How to connect SystemC model with SystemVerilog?

Say we have a SystemC model of a decade counter and I want to verify the SystemVerilog counter RTL against the SystemC model. How can we connect the two in an SV/UVM-based testbench so that they can communicate with each other?
Mentor developed a free package called UVMConnect that was developed specifically for the application you are asking about. See https://verificationacademy.com/topics/verification-methodology/uvm-connect. You will need a simulator that supports SystemVerilog and SystemC simulating together, like Questa.
If you're using QuestaSim I think UVM-Connect from Mentor is the way to go. When I first used it (4 years ago) it was very buggy and gave the most cryptic segfault errors I've ever seen. But, with help from Mentor support, I managed to overcome them and get stuff done. It should be more stable now, but if you have problems with it don't hesitate to contact Mentor support. They are very responsive.
However, if you're using Cadence tools and/or the e language I think that UVM-ML from Cadence is a much more comprehensive solution. It allows you to connect components written in any combination of languages (SV-SC, SV-e, SC-e) and it has nicer documentation and examples. I understand it's also compatible with all simulators now. You can find it here: http://forums.accellera.org/files/file/65-uvm-ml-open-architecture/
Not sure what the Synopsys folks recommend for their tool suite. Maybe someone who has used them can offer more information on this. But I'm guessing that both UVM-ML and UVM-Connect could work, since their makers claim that they are portable.
And lastly, if you're planning to use SystemC as a verification language (very unlikely, but just for the sake of diversity) there is something called UVM-SystemC which is basically a clone of SV-UVM written in C++/SystemC. It's currently in its alpha release and it lacks many features (register modeling, constrained randomization, coverage collection, etc.). It feels a lot like SV-UVM and I think it's a nice toy to play with in your spare time if you can't afford a commercial simulator license. You can find it here: http://accellera.org/images/downloads/drafts-review/uvm-systemc-1.0-alpha1.tar.gz

porting java code to contiki-os

I am using Contiki-OS to simulate some motes that would have semantic capabilities. Contiki-OS (Erbium) is written in C, but our semantic libraries are written in Java.
Can anyone here tell me whether it is possible to use these libraries with Erbium or Contiki-OS, or do I have to rewrite everything from scratch?
Update:
Just a minor update to the question: is it possible to use Java code in the Cooja simulator?
Cooja is indeed written in Java.
You can extend or modify Cooja if you need.
You can find out more about Cooja on the Contiki wiki as well as in numerous papers by Fredrik Österlind. Perhaps you should also take a look at Fredrik's PhD thesis "Improving Low-Power Wireless Protocols with Timing-Accurate Simulation", which is mostly about Cooja.
You might be able to use something like this:
http://www.codemesh.com/products/junction/
It appears to have a code generator that takes Java bytecode and creates C code from it... but it might also need a runtime library that's platform specific.
With all that in mind, I don't think you will be successful. Most of the platforms are nearly out of memory and/or flash by the time you are working with Erbium; I doubt you'll have the resources to process Java code somehow.
And if you did get some success from this approach it would probably take a lot of time and effort to do so. With that time and effort you probably could have written the C code to do what you need instead.

Are there any disadvantages of using C# 3.0 features?

I like C# 3.0 features, especially lambda expressions, auto-implemented properties, and in suitable cases also implicitly typed local variables (the var keyword), but when my boss found out that I was using them, he asked me not to use any C# 3.0 features at work. I was told that these features are not standard and confusing for most developers and their usefulness is doubtful. I was restricted to using only C# 2.0 features, and he is also considering forbidding anonymous methods.
Since we are targeting .NET Framework 3.5, I cannot see any reason for these restrictions. In my opinion, maybe the only disadvantage is that my few co-workers and the boss (also a programmer) would have to learn some basics of C# 3.0, which should not be difficult. What do you think about it? Is my boss right, and am I missing something? Are there any good reasons for such a restriction in a development company where C# is the main programming language?
I have had a similar experience (asked not to use generics, because they may be confusing to my colleagues).
The fact is that we now use generics and none of my colleagues have a problem with them. They may not have grasped how to create generic classes, but they sure do understand how to use them.
My opinion on that is that any developer can learn how to use these language features. They may seem advanced at first but as people get used to them the shock of newness lessens.
The main argument for using these features (or any new language features) is that this is a simple and easy way to help my colleagues advance their skills, rather than stagnating.
As for your particular problem of not using lambdas: lots of the updates to the BCL have overloads that take delegates as parameters, and these are in many cases most easily expressed as lambdas; not using them this way means ignoring some of the new and updated uses of the BCL.
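To illustrate (with invented numbers), here is the same List<T>.FindAll call written first with a C# 2.0 anonymous method and then with a C# 3.0 lambda; both compile down to the same Predicate<int> delegate:

    using System;
    using System.Collections.Generic;

    class LambdaVsAnonymousMethod
    {
        static void Main()
        {
            List<int> numbers = new List<int> { 1, 4, 9, 16, 25 };

            // C# 2.0 style: anonymous method
            List<int> oldStyle = numbers.FindAll(delegate(int n) { return n > 5; });

            // C# 3.0 style: lambda expression, same delegate under the hood
            List<int> newStyle = numbers.FindAll(n => n > 5);

            Console.WriteLine(oldStyle.Count == newStyle.Count); // True: both contain 9, 16, 25
        }
    }

The same shape shows up all over LINQ: Where, Select, OrderBy and friends all take delegates that read most naturally as lambdas.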
In regards to the issue of your peers not being able to learn lambdas - I found that Jon Skeet's C# in Depth deals with how they evolved from delegates in a manner that was easy to follow and a real eye-opener. I would recommend you get a copy for your boss and colleagues.
Your boss is going to need to understand that language (and other) improvements are designed to give developers more capabilities, and make them more efficient in completing the task at hand, and that if he is not going to allow them for unknown reasons then:
The development team isn't producing at its greatest potential.
The company isn't benefiting from increased efficiency/productivity.
As others have said, developers aren't worth their salt if they can't keep up with some of the latest improvements in the language that they are using on a daily basis. I suspect your boss hasn't done much coding lately and it is his inability to understand the latest language improvements that has motivated this decision.
I was told that these features are not standard and confusing for most developers and their usefulness is doubtful. I was restricted to using only C# 2.0 features, and he is also considering forbidding anonymous methods.
Presumably roughly translates to your boss meaning...
These features are confusing for me, and I don't find them useful because I don't understand them.
Which is fairly symptomatic of the Blub paradox (well, or just sheer laziness). Either way there's no merit in what he's saying, and you should start looking for another job if he continues down that road.
If the project is strictly C# 3+ from now on, then you would not break the build by including these items. However, before using them you should be aware of the following:
You can't use them if the project lead gets to make the decision and votes no.
Other than that, you should use them where it makes the code significantly easier to maintain.
You should not use them in ways that are confusing, or unnecessary in the sense that they do not significantly improve the maintainability of the code. This does mean you should not use them where the code is effectively the same or barely improved.
If Microsoft didn't define the standard and these were features that they added to a non-Microsoft language, I would say your boss might have a point. However, since Microsoft defines the language and uses these very features in implementing significant parts of .NET 3.5 (and 4.0), I'd say that you'd be foolish to ignore them. You may not choose to use some of them -- var, for instance, may not be acceptable in all environments due to coding standards -- but a blanket policy of avoiding new features seems unreasonable.
The trickier bit is when should you start using new features, because they can be confusing and may delay development. In general, I choose to use new language features and platform elements on new projects. I often avoid using them on projects that are currently in development when the feature/framework enhancement comes out, deferring until the next project. On a long project, I might introduce them at a significant milestone if the amount of rearchitecting is small or the feature is worth the changes. Normally, I'd wait until the project is due for significant changes anyway and then evaluate if refactoring to newer features is warranted.
The jury is still out on the long-term consequences of some features, but if the main rationale is "it is confusing to other developers" or something similar, then I would be concerned about the quality of the talent.
I like C# 3.0 features, especially lambda expressions, auto-implemented properties, and in suitable cases also implicitly typed local variables (the var keyword), but when my boss found out that I was using them, he asked me not to use any C# 3.0 features at work. I was told that these features are not standard and confusing for most developers and their usefulness is doubtful.
He's got a point.
Following that line of thought, let's make a rule against generic collections since List<T> doesn't make any sense (angle brackets? wtf?).
While we're at it, let's eliminate all interfaces (when are you ever gonna need a class without any implementation?).
Hell, let's go ahead and eliminate inheritance since it's so tricky these days (is-a? has-a? can't we all just be friends?).
And use of recursion is grounds for dismissal (Foo() invokes Foo()? Surely you must be joking!).
Errrm... back to reality.
It's not that C# 3.0 features are confusing to programmers, it's that the features are confusing to your boss. He's familiar with one technology and stubbornly refuses to part with it. You're about to enter the Twilight Zone of the Blub Paradox:
Programmers get very attached to their favorite languages, and I don't want to hurt anyone's feelings, so to explain this point I'm going to use a hypothetical language called Blub. Blub falls right in the middle of the abstractness continuum. It is not the most powerful language, but it is more powerful than Cobol or machine language.
And in fact, our hypothetical Blub programmer wouldn't use either of them. Of course he wouldn't program in machine language. That's what compilers are for. And as for Cobol, he doesn't know how anyone can get anything done with it. It doesn't even have x (Blub feature of your choice).
As long as our hypothetical Blub programmer is looking down the power continuum, he knows he's looking down. Languages less powerful than Blub are obviously less powerful, because they're missing some feature he's used to. But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn't realize he's looking up. What he sees are merely weird languages. He probably considers them about equivalent in power to Blub, but with all this other hairy stuff thrown in as well. Blub is good enough for him, because he thinks in Blub.
When we switch to the point of view of a programmer using any of the languages higher up the power continuum, however, we find that he in turn looks down upon Blub. How can you get anything done in Blub? It doesn't even have y.
C# 3.0 isn't hard. Sure you can abuse it, but it isn't hard or confusing to any programmer with more than a week of C# 3.0 experience. Your boss's skills have just fallen behind and he wants to bring the rest of the team down to his level. DON'T LET HIM!
Continue using anonymous funcs, the var keyword, auto-properties, and what have you to your heart's content. You won't lose your job over it. If he gets pissy about it, laugh it off.
Like it or not, if you plan on using LINQ in any situation, you're going to have to utilize some of the C# 3.0 language specs.
Your boss is going to have to warm up to them if he wants to utilize the feature sets you get from 3.5, which are numerous and worth your time investing in.
Also, from my experience in leading teams, I've found that using the 3.0 features actually helps developers' readability and understanding of the code base. There's about a week's worth of time spent by a dev trying to understand what the syntax means, but once they get it they much prefer the new way over the old way.
Perhaps you can do a presentation once a week on each feature to everyone and get some of the developers on your side to help convince management of the benefits.
I recently moved from a bleeding-edge C# house to a C# house that was running mostly on .NET 1.1 and some 2.0 projects, using mostly only 1.1 features. Luckily management stayed away from the code. Most of the developers love all the new features in the newer frameworks, they just don't have the time or inclination to figure them out by themselves. Once I managed to show them how they could make their own lives easier, they started using the features by themselves and we have migrated several projects to gain the new language features and better tool advantages.
Some people are just afraid of change, because maybe you'll make them all look stupid using fancy new technologies. Could also be that your boss doesn't want the team learning new things instead of getting work done the old-fashioned way.
The var keyword can certainly be abused, but in most cases it reduces redundant code. LINQ is the main thing you want from .NET 3.5, because of the huge time saving in the amount of code you have to write. Your boss should be encouraging you to use it. Also, the base class libraries now take delegates as parameters, so you will be limiting yourself a lot by not using them. Lambdas are just some fancy syntactic sugar to make delegates cleaner.
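As a small, invented example of the kind of code in question - auto-implemented properties, object and collection initializers, var, and a LINQ query in place of a hand-written filter-and-sort loop:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Customer
    {
        public string Name { get; set; }     // auto-implemented properties (C# 3.0)
        public decimal Total { get; set; }
    }

    class VarAndLinqDemo
    {
        static void Main()
        {
            var customers = new List<Customer>   // object and collection initializers
            {
                new Customer { Name = "Ada",   Total = 120m },
                new Customer { Name = "Brian", Total = 40m  },
                new Customer { Name = "Clara", Total = 310m },
            };

            // 'var' avoids spelling out IOrderedEnumerable<Customer>; the type is still static.
            var bigSpenders = from c in customers
                              where c.Total > 100m
                              orderby c.Total descending
                              select c;

            foreach (var c in bigSpenders)
                Console.WriteLine("{0}: {1}", c.Name, c.Total);
        }
    }

Everything here is statically typed throughout; var just saves spelling out the type names.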
I would refer you to Effectively Integrating into Software Development Teams and Leading by Example. Two really great articles on how to deal with teams that are afraid of change.

Modula-2 Developer?

Guess no new project is implemented in languages like Modula, Ada, Oberon... anymore (right?). But there are still legacy systems floating around, popping up here and there looking for their creators. They can't find them because they might be retired, sitting on a beach somewhere enjoying themselves.
Serious:
1) I am wondering if there are still active (experienced) Modula programmers around ?
2) Anyone experience with porting Modula code to a new hardware generation ?
3) Does anyone know about a tool that can re-engineer, i.e. map procedures and MOD files in a graphical way? Such tools are available for e.g. C programs.
Sure, Modula syntax is not that breathtaking in comparison to today's .NET and Java APIs with thousands of methods, but if someone drops about 100,000 lines of almost undocumented source code on you (nicely mixed with some 8,000 lines of assembler), you had better know whether to reject it. I have this request and I am very reluctant. (Option: port and keep the Modula source, or migrate to another language, in 9 months!)
cheers
1) I am wondering if there are still active (experienced) Modula programmers around ?
There are plenty of them, but you have to do a bit of web search to find them. If you search for "Curriculum Vitae" (or "Resume") and "Modula-2" there should be plenty of hits. Also, anybody who has experience in Oberon, Pascal or Delphi will be able to handle Modula-2.
Also there are active Modula-2 projects, most notably:
GNU Modula-2 at http://www.nongnu.org/gm2
Modula-2 R10 at http://modula-2.net/m2r10
Modula2JCC at http://code.google.com/p/modula2jcc
Modulipse at http://modulipse.sourceforge.net
Schwarzer Kaffee http://sourceforge.net/projects/schwarzerkaffee
2) Anyone experience with porting Modula code to a new hardware generation ?
Ask on the GNU Modula-2 mailing list. Many GNU Modula-2 users have Modula-2 code from 16-bit DOS systems they like to port to modern platforms. The GNU Modula-2 website lists this as one important motivation for GM2. The GM2 mailing list is at:
http://lists.nongnu.org/mailman/listinfo/gm2
There is also the Modula-2 Usenet news group, you can reach it via the Google interface at
http://groups.google.com/group/comp.lang.modula2
Last but not least, there is a Modula-2 IRC channel at Freenode
irc://irc.freenode.net/#modula-2
3) How to assess 100.000 lines of Modula-2 source code
"if someone drop about 100.000 lines of almost undocumented sourcode at you (nicely mixed with some 8000 lines assembler), you better know if you better reject it. I have this request and I am very resistant. (Option: port and keep modula source or migrate to other language in 9 months!)"
You may want to contact Rick Sutcliffe, a well known Modula-2 scholar and book author who is also the maintainer of the Modula-2 FAQ in which he states that he does get hired to do consulting work for assessing Modula-2 source code in company take-over situations. It seems to me that your situation might be similar enough to justify hiring an expert to establish the value of the software that is offered to you.
The Modula-2 FAQ is at http://faq.modula2.net
1) I am wondering if there are still active (experienced) Modula programmers around ?
Yes, I'm one. But I already have a job :-)
2) Anyone experience with porting Modula code to a new hardware generation ?
Not clear if you meant porting code or porting a compiler. Porting Wirth's Modula-2 compiler (or Oberon compiler) should be easy. Ada and Modula-3 are another story.
3) Does anyone know about a tool that can re-engineer, means map Procedures and Mod-files in a graphical way. These tools are available for eg. C programs.
I don't understand the question. If you are looking to visualize the import graph of a Modula-2 program, you could easily write something to emit dot. Visualizing call graphs is another story.
Here's my bottom line on Modula-2 and Oberon:
Any C programmer worth his or her salt can quickly learn enough Modula-2 to maintain a large legacy application. Oberon's another story; its model of exported names and type extension is not like the object models found in other OO languages.
Wirth's genius as a language designer was to make things easy for the person writing the compiler. So if you need tools, any good compiler writer can produce them. Wirth's compiler should be available and easy to port.
Ada does not deserve to be mentioned in the same breath with Modula-2 and Oberon.
Ada is still a very active language. I have been using it in my own research since 1995 and in my lectures at a university since last year.
I myself don't know much about Modula, however I worked at a research center in Brazil that had a packet-switching network project (Compac) that was created entirely in Modula-2. If I'm not mistaken they even developed the compiler/linker themselves. Since I don't feel at liberty to point you to specific persons, I would suggest you do a Google search for "compac" and "cpqd" and I can pretty much guarantee you will find names of people involved in it. It should come as no surprise that references to it are quite old, from the late '80s.
Modula-2 is architecturally not that dissimilar to C. A programmer familiar with C should have little trouble figuring out Modula-2. Given that your application has a significant body of assembler code then you will need someone with low-level skills anyway.
IIRC Modula-2's grammar is LL(1) or nearly so, so writing a parser to generate call graphs for a Modula-2 code base is not beyond the wit of man. Graphviz is your friend if you want a quick and easy way of visualising the call graphs. Again, this suggests that you're up for employing a 'real programmer' to do the porting work.
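As a rough sketch of the Graphviz half of that idea (written in C# purely for illustration; the call map below is invented, and extracting it from the Modula-2 sources is the part your parser would do):

    using System.Collections.Generic;
    using System.IO;

    class CallGraphToDot
    {
        static void Main()
        {
            // procedure -> procedures it calls, as produced by your Modula-2 parser
            var calls = new Dictionary<string, List<string>>
            {
                { "Main",          new List<string> { "ReadConfig", "ProcessPacket" } },
                { "ProcessPacket", new List<string> { "ChecksumOk", "Route" } }
            };

            using (var w = new StreamWriter("callgraph.dot"))
            {
                w.WriteLine("digraph calls {");
                foreach (var entry in calls)
                    foreach (var callee in entry.Value)
                        w.WriteLine("  \"{0}\" -> \"{1}\";", entry.Key, callee);
                w.WriteLine("}");
            }
            // Render with: dot -Tpng callgraph.dot -o callgraph.png
        }
    }

The hard part is the front end that walks the Modula-2 sources; emitting the dot text itself is trivial, as above.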
If you need a reasonably viable Modula-2 compiler, you could look at the Amsterdam Compiler Kit which does have a competent Modula-2 compiler that can target a wide variety of platforms, although it doesn't support Win32 IIRC.
I would suggest that documenting and porting the existing Modula-2 code base is probably easier than attempting to re-write it in C. However, if you need to move to a different processor architecture then you will have to re-write the assembly language bits anyway. This does rather change the value proposition of porting.
If it is you doing the porting then you might consider doing it in two steps:
1) Arrange with the customer to do a utility to generate the call graph, and give them a feasibility study recommending what to do and some estimate of the scope.
2) Do the port, either porting the code base or re-writing it. Bear in mind that you may not need a low level language for the entire code base if you're running it on a modern computer. You may be able to do it in a mixture of (say) Python and C with less effort than would have been needed for a rewrite purely in C.
Yup.
I realise that you asked this question quite some time ago, but I also know that projects nobody likes to handle tend to get delayed...
I built several large systems in Modula-2 over a span of ten years and have this insane habit of taking on impossible tasks.
I have not touched it for about ten years but am absolutely certain that I can port your system for you to almost any other platform. Why not get in touch with me if you are still interested?
Oh yeah - better still, we are both in Singapore :-)
ADW Modula-2 has now been released as freeware: http://www.modula2.org/adwm2/ Since it's free and supports 32- and 64-bit Windows applications (and I know Modula-2), I've picked it up and am using it for small utility work that I want to build as a 64-bit Windows binary (most of my work is in Java and .NET, which are fine, but sometimes a pure binary is best; I already use MASM32 for 32-bit Windows apps).
Edit:
There's also a project in the works (still very early on, not yet usable as of the date of this edit) to compile Modula-2 on the JVM (with a transpile-to-Java option): https://github.com/m2sf/m2j

What inherited code has impressed or inspired you?

I've heard a ton of complaining over the years about inherited projects that us developers have to work with. The WTF site has tons of examples of code that make me actually mutter under my breath "WTF?"
But have any of you actually been presented with code that made you go, "Holy crap this was well thought out!" or "Wow, I never thought of that!"
What inherited code have you had to work with that made you smile and why?
Long ago, I was responsible for the Turbo C/C++ run-time library. Tanj Bennett wrote the original 80x87 floating point emulator in 16-bit assembler. I hadn't looked closely at Tanj's code since it worked well and didn't require attention. But we were making the move to 32-bits and the task fell to me to stretch the emulator.
If programming could ever be said to have something in common with art this was it.
Tanj's core math functions managed to keep an 80-bit floating point temporary result in five 16-bit registers without having to save and restore them from memory. x86 assembly programmers will understand just what an accomplishment this was. Register space was scarce, and keeping five registers as your temp while simultaneously doing complex math was a beautiful sight to behold.
If it was only a matter of clever coding that would have been enough to qualify it as art but it was more than that. Tanj had carefully picked the underlying math algorithms that would be most suitable for keeping the temp in registers. The result was a blazing-fast floating point emulator which was an important selling point for many of our customers.
By the time the 386 came along most people who cared about floating-point performance weren't using an emulator but we had to support Intel's 386SX so the emulator needed an overhaul. I rewrote the instruction-decode logic and exception handling but left the core math functions completely untouched.
In my first job, I was amazed to discover a "safe ID" class in the codebase (C++), which wrapped numerical IDs in a class templated on an empty tag class, ensuring that the compiler would complain if you tried, for example, to compare or assign a UserId to an OrderId.
Not only did I make sure that I had an equivalent Id class in all subsequent codebases I worked on, but it actually opened my eyes to what the compiler could do to guarantee correctness and help write stronger code.
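The original was C++ templates, but the idea translates to most languages with generics; here is a rough, invented C# approximation of such a "safe ID":

    using System;

    // Empty tag types; their only job is to make ID kinds distinct at compile time.
    sealed class UserTag { }
    sealed class OrderTag { }

    struct Id<TTag> : IEquatable<Id<TTag>>
    {
        public readonly int Value;
        public Id(int value) { Value = value; }
        public bool Equals(Id<TTag> other) { return Value == other.Value; }
        public override string ToString() { return Value.ToString(); }
    }

    class SafeIdDemo
    {
        static void Main()
        {
            var userId = new Id<UserTag>(42);
            var orderId = new Id<OrderTag>(42);

            Console.WriteLine(userId.Equals(new Id<UserTag>(42))); // True
            Console.WriteLine(orderId);                            // 42

            // Id<UserTag> oops = orderId;   // compile error: the tag types differ
        }
    }

The tag classes carry no data; Id&lt;UserTag&gt; and Id&lt;OrderTag&gt; are simply incompatible types, so mixing them up is caught at compile time rather than in production.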
The code that impresses me the most, and which I try to emulate - is code that seems too simple and easy to understand.
It is damn difficult to write that kind of code. :-)
I have a funny story to tell here.
I was working on this Java-ish application, filled with getters and setters that did nothing but get or set, and interfaces, and everything ever invented to make code unreadable. One day I stumbled upon some code which seemed very well crafted -- it was basically an algorithm implementation that looked very elegant: few lines of readable code, even though it respected every possible rule the project had to adhere to (it was checkstyled automatically).
I couldn't figure out who on the team could have written such code. I was dying to discuss it with them and share thoughts. Thankfully, we had switched to Subversion (from CVS) a few months earlier and I quickly ran an 'svn blame'. I loled all over the place, seeing my name next to the implementation.
I had heard stories about people not remembering code they wrote 6 months back, code that is a nightmare to maintain. I could not believe such a thing could happen: how can you forget code you wrote? Well, now I'm convinced it can happen. Thankfully the code was alright and easy to extend, so I've only experienced half of the story.
Some VB6 code by another programmer at my company that I came across handled error conditions very well (whether dealing with them directly or logging them).
Along with some rather complex code that was well commented.
I know this will bring a lot of answers like "I've never found good code before I stepped in" and variations.
I think the real problem there is not that there aren't good coders or excellent projects out there; it's that there's an excess of NIH syndrome and the fact that nobody likes code from others. The latter is just because you have to make an intellectual effort to understand it, a much bigger effort than you need to understand your own code, so you dislike it (it makes you think and work, after all).
Personally, I can remember (as everyone can, I guess) some cases of really bad code, but I also remember some pretty well-documented, elegant code.
Currently, the project that most impressed me was a very potent dynamic workflow engine, not only for its simplicity but also for the way it is coded. I can remember some very clever snippets here and there, as well as a beautiful metaprogramming library based on a full IDL developed by some friends of mine (Aspl.es).
I inherited a large bunch of code that was SO well written I actually spent the $40 online to find the guy, I went to his house and thanked him.
I think Rocky Lhotka should get the credit, but I had to touch a CSLA.NET application recently (in my private practice on the side) and I was very impressed with the orderliness of the code. The app worked extremely well, but the client needed a few extensions. The original author had died tragically, and the new guy was unsophisticated. He didn't understand CSLA.NET's business object based approach, and he wanted to do it all over again in cut-and-dried VB.NET, without any fancy framework.
So I got the call. Looking at a working example of WinForm binding and CSLA.NET was pretty instructive about a lot of things.
Symbian OS - the old core bit of it anyway, the bit that dated back to the Psion days, or to those who even today keep that spirit alive.
And sitting right alongside it, and all over it, is all the new crap created by the lowest bidders hired by the big phone corporations. It was startling; you could actually feel in your bones whether a bit of the code base was old or new somehow.
I remember when I wrote my bachelor thesis on type inference: my Pascal-to-Pascal 'compiler' was an extension of a parser my supervisor had programmed (in Java). It had a pretty good structure as far as I can remember, and for me, who had never done any serious object-oriented programming, it was quite a revelation.
I've been doing a lot of Eclipse plug-in development and often had to debug into the actual Eclipse source code. While I haven't "inherited" it in the sense that I'm not continuing work on it, I've always been impressed with the design and quality of the early core.