Almost all conventional languages today represent the programmer's intention as text source, which is then (let's say, for the sake of simplicity) translated to some bytecode/machine code and interpreted/executed by a VM/CPU.
There is another technique which, for some reason, isn't that popular these days: "freeze" the run-time of your VM and dump/serialize the environment (symbol bindings, state, code (whatever that is)) into an image, which you can then transfer, load and execute.
Consequently, you do not "write" your code in the usual way; instead, you modify the environment with new symbols while at run-time.
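To make the idea concrete, here is a minimal sketch in Python using the third-party dill library, whose dump_session/load_session helpers serialize the live interpreter namespace to a file (the file name is illustrative):

    # Sketch: image-style "freeze and resume" of an interpreter session
    # using the third-party `dill` library (pip install dill).
    import dill

    counter = 41
    def bump():
        global counter
        counter += 1

    bump()                              # mutate the live environment
    dill.dump_session("session.img")    # "freeze" bindings + code to an image

    # ...later, possibly in a fresh interpreter process:
    dill.load_session("session.img")    # resume where we left off
    print(counter)                      # -> 42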
I see great advantages to this technique:
Power-boosted REPL: you can introspect your code as you write it, partially evaluate it, test it directly and see the effects of your changes. Then roll back if you've messed up and try again, or finally commit it to the environment. No need for a long compile-run-debug cycle;
Some of the usual complaints about dynamic languages (that they cannot be compiled because the compiler cannot reason about environments statically) are obviated: the interpreter knows where everything is located and can substitute symbol references with static offsets and perform other optimizations;
It's easier on the programmer's brain: you "offload" contextual information about the code from your head, i.e. you don't need to keep track of what your code has already done to some variable/data structure or which variable holds what: you see it directly in front of your eyes! In the usual way (writing source), programmers add new abstractions or comments to the code to clarify intent, but this can (and will) get messy.
The question is: what are the disadvantages of this approach? Is there any serious, critical disadvantage that I am not seeing? I know there are some problems with it, e.g.:
try building a module system with it that will not result in dependency hell or serious linkage problems
security issues
try to version-control such images and enable concurrent development
But these are, IMHO, solvable with a good design.
EDIT1: concerning the status "closed, primarily opinion-based". I've described two existing approaches, and it is clear and obvious that one is preferred over the other. Whether the reasons for that are purely "opinion-based" or there is research to back them up is unknown to me, but even if they are opinion-based, if someone listed the reasons for such an opinion to develop, that would actually answer my question.
As a daily user of Smalltalk, I have to say I haven't found any fundamental disadvantages, and I have to agree that there are lots of advantages.
It makes metaprogramming and reasoning about your program easy, and it supports refactoring and code rewriting much better.
It requires/develops a different way of looking at your code, though. Smalltalk has little to offer to developers who are not interested in abstraction.
Recently my project manager asked me to work on InterSystems Caché ObjectScript. Earlier I worked as a Java developer (J2EE). So my question is: how different is Caché from Java? A comparison would be great to have.
Caché ObjectScript is very different from Java and has very little in common with it. It is more like a dynamically typed, compiled scripting language with a metalanguage built in (class definitions) and a large number of features you need to know in order to write good code. All the code is compiled to low-level (but fairly readable) so-called routine code and is executed by the Caché DBMS and its application engine.
Take, for example, this reference. As you may notice, there are many weird symbols and structures like $, $$, $$$, ##class, &sql(...), &javascript<...>, #dim, $System, .#, $get, $zu(...), %, ^%, { ... } (this list is long). Some of the language features are quite surprising at first glance. For example, $get(...) looks like a function but silently acts like a try/catch statement, as do $data and some other system functions.
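To anchor that behavior with an analogy (mine, not from the InterSystems docs): $get acts roughly like a defaulted dictionary lookup in Python, returning a fallback instead of erroring on an undefined node:

    # Rough Python analogy for ObjectScript's $GET(var, default):
    # looking up something undefined yields the default, no exception.
    settings = {"timeout": 30}

    value = settings.get("retries", 0)   # like $GET(^Settings("retries"), 0)
    print(value)                         # -> 0, silently, no error raised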
So prepare to work with the InterSystems documentation! The recently launched InterSystems community is also a great resource. While googling, you may find quite a few answers on the internet, but keep in mind to search with the "intersystems" or "objectscript" keywords. Many things you won't find there, though, and in that case you should use the InterSystems docs or ask on the community. Once you get used to the language (which for me took over 6 months), you will feel more confident in it.
It is also worth mentioning that Caché ObjectScript is literally a "dinosaur" language, which evolves and gains upgrades over time. That's why there are so many different features. Some of them you shouldn't use anymore: for example, instead of writing code in routines, as people did before OOP concepts were introduced, you should use classes. ObjectScript's JSON capabilities (the ability to write JSON inside ObjectScript) were introduced only about a year ago. You may find plenty of "prehistoric" code in Caché and should take that in stride: it is a really huge ecosystem.
Hope this helps, happy hacking!
I'm fully aware of Scala and Akka actors, and of other, non-stdlib concurrency packages for Scala. However, I've gotten used to Gevent, a green-threading + non-blocking-IO framework/library for Python that has not been getting the attention I think it deserves compared to the likes of NodeJS and the various actor frameworks. With it, writing concurrent code is easy: you just write code as if with "real" threads, but no actual OS threads are used, so you can have thousands of them, like Erlang processes, and all existing code Just Works. By comparison, I'm currently not too much in love with the rather limited (and somewhat hard to compose with "normal" code) way in which concurrent code needs to be written when Akka-style actors are used.
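For readers who haven't used Gevent, here is a minimal sketch of the style I mean (real gevent API; the toy URL is mine):

    # Thousands of cooperative "threads" written in plain blocking style.
    import gevent
    from gevent import monkey
    monkey.patch_all()   # make stdlib sockets/sleep cooperative

    import urllib.request

    def fetch(url):
        # looks like ordinary blocking code; yields to other greenlets on IO
        return urllib.request.urlopen(url).read()[:60]

    jobs = [gevent.spawn(fetch, u) for u in ["http://example.com"] * 100]
    gevent.joinall(jobs, timeout=10)
    print([j.value for j in jobs if j.value])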
Now, there is Kilim, which appears to do what Gevent does (except that it uses a CPS transform rather than runtime stack manipulation); also, Scala is known to interoperate fully with Java. However, does this interoperability extend fully to the level at which Kilim operates? If yes, what are the key things to keep in mind when combining Scala and Kilim? I've found some resources on this by googling (e.g. https://github.com/lllazu/kilim-scala), but nothing clear or substantial.
Note: I'd also be interested in aspects such as:
why this is typically a discouraged approach to start with (i.e. I should be using Akka);
that I'm wrong and Akka-style actor code isn't limiting, or is not limiting enough to have any considerable effect on the (high level) style of code;
Feel free to comment on anything related.
In C/C++, the most generic and least invasive approach to asynchronous execution seems to be callbacks, and I prefer to stay with callbacks in order to be able to reuse the most libraries out there. With a bit of coroutine magic, any callback-oriented library can be used imperatively; that is, for any method foo (callback (bar)) I can make a wrapper bar = foo (cbcoro) which can be used within a normal imperative control flow (while doing context switching behind the scenes).
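In Python/gevent terms (just to illustrate the pattern; foo below is a hypothetical stand-in for a callback-style library call), the cbcoro-style wrapper looks roughly like this:

    # Sketch: wrapping a callback-style API into an imperative call.
    import gevent
    from gevent.event import AsyncResult

    def foo(x, callback):
        # hypothetical stand-in for a callback-oriented library function
        gevent.spawn_later(0.1, callback, x * 2)

    def foo_sync(x):
        result = AsyncResult()
        foo(x, callback=result.set)   # the callback delivers the value
        return result.get()           # parks only this coroutine until set()

    print(foo_sync(21))  # -> 42, written as plain imperative code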
I'm starting another project in Scala now and am going to try to use delimited continuations in a similar way.
P.S. Bytecode instrumentation that works fine with Java bytecode can still fail with Scala bytecode; I've seen this happen with db4o and DataNucleus. Therefore you need good support (or very good knowledge of the tools in question) if you're going that way.
I'm starting to work with Moose/Perl and I'm searching for a UML tool to create diagrams representing the Moose OO system. I have already worked with Astah (formerly Jude), but it is designed for the Java OO system. Can someone recommend another UML tool for working with Moose/Perl?
My two cents:
I have written an extension (a .xom file) for Sybase PowerDesigner. This tool has a powerful metaclass editor that you can script with VBScript and a proprietary language, GTL. It also has large collections of customizable metaclasses and templates.
My PowerDesigner extension is quite hacky and contains some stale code that I didn't clean up, so I haven't published anything. It works for me, and only for me. Some lessons learned, off the top of my head:
I wanted to do UML modeling and code-generation, do you want to do that, too?
Moose is quite attribute-heavy so a UML approach is worth doing in this respect.
Didn't use roles much, but I tried to map them to interfaces anyway.
I am not satisfied with how to model relationships. There are lots of edge cases and "impedance mismatches" between UML concepts and Moose/Perl concepts. (BTW, what's the Moose equivalent of an "association class"?)
Native traits are a nice feature in Moose, but I haven't succeeded in creating a GUI for editing them.
I also hurt my brain designing a comprehensible GUI for type coercions (I often need to check and coerce date values).
Static attributes are an important feature in UML but less important in Moose. The problem is that there is no "static" keyword in Perl/Moose; you have to declare a "use MooseX::ClassAttribute" (or whatever it is called), and do it only once per class, but in the right place (order matters).
The generated code is impossible to pretty-print, so I usually send it through perltidy right away to bring it to a "canonical" form, which makes diffing and versioning / committing to SVN easier.
When a class is generated, the compactness of the Moose class is gone: you'll have SVN properties, header comments, lots of "use" + "use lib" statements, lots of POD, comment lines after each sub declaration documenting parameter passing, and the obligatory footer ("no Moose ...").
Unfortunately, reverse engineering Perl code (to update a UML model from code) is impossible. Thus, at some point I must stop working in the UML tool and start editing the Perl code directly, abandoning the model. Checking these changes back in must be done manually at some later time; it is very time-consuming and requires care.
Advantages:
Generating properly POD-documented code is the main productivity gain you'd get by doing all this UML modeling, IMHO. Good for "enterprisey" programming environments.
You can autogenerate *.t files with test cases (or stubs of test cases). This requires some thinking to design smart tests and to avoid the problems Dave Rolsky has written about in his blog post: "add(ing) absolutely nothing that isn't already tested by Moose itself".
You can define custom checks in the model such as "check if builder methods for all declared attributes exist, and if they don't exist, create a stub, or (ask me what to do)"
Easy mapping of nightmarish database tables to Moose classes. (I have to work with lots of multi-column tables that cannot be touched.) Build your own graphical ORM mapper!
There might be even more advantages.
I haven't seen a UML tool yet for Moose. It wouldn't be that difficult to build one, just a little labor-intensive. Mostly it would require crawling the meta-class tree for a given class and outputting the proper UML markup for each step. If you are interested in building something like this, you can stop by #moose on irc.perl.org. I'm sure someone can help point you in the right direction.
Just stumbled across your question while looking for "UML Moose perl".
One of the other links thrown up was to a utility called umlclass.pl that looks quite interesting.
I'll post a follow-up after seeing how well it works.
There are probably different kinds of code generation. In RoR for example, Rails can create skeletons for models, controllers, etc. But the developer has to complete those skeletons.
But sometimes there are projects where many core artifacts get generated in their entirety according to a set of definitions or models.
I am mainly interested to know the advantages and disadvantages of this latter type of code generation.
The main advantage is that it does the work for you and is repeatable, and the code will most likely work (that depends, of course, on whether the person who wrote the generator knew what they were doing). It can remove a lot of the time spent on menial coding tasks. For example, is it really worth your time to write objects which are nothing more than containers for data from the database, or is it better to have some program automatically create these for you?
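As a toy illustration of that kind of generator (the schema format and names here are made up, not taken from any particular tool):

    # Sketch: generate data-container classes from a made-up table schema.
    schema = {"table": "customer", "columns": ["id", "name", "email"]}

    def generate_class(schema):
        cls = schema["table"].capitalize()
        cols = schema["columns"]
        lines = [f"class {cls}:"]
        lines.append(f"    def __init__(self, {', '.join(cols)}):")
        lines += [f"        self.{c} = {c}" for c in cols]
        return "\n".join(lines)

    print(generate_class(schema))
    # class Customer:
    #     def __init__(self, id, name, email):
    #         self.id = id
    #         ...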
The big disadvantage is that it forces you into writing code that is compatible with the generated code. Most of the time this isn't a problem, but it can be a real hassle when someone comes up to you and says "Hey, can we do X?" and that conflicts with the generated code. If the generator is good, it will allow you to change functionality, but that almost always increases the complexity of the generated code, and this complexity has a price: it's more difficult to understand, and it can be less efficient than code you write yourself. This of course varies by situation.
The main problem with this style of programming is that it contaminates the view of your project and no longer allows you to practice DRY. It is useful to have a clean separation between what is automatically generated and what is written by a human. Most systems, especially file-based ones, do not support such a separation well. In systems that have good introspection capabilities (e.g. Smalltalk images), building a dynamic object structure by walking the definition/model is preferable.
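A sketch of that introspective alternative in Python: instead of emitting source files, walk the model and build the classes at runtime (the model format is made up for illustration):

    # Sketch: build classes dynamically from the model, no generated text.
    model = {"name": "Customer", "fields": ["id", "name", "email"]}

    def make_class(model):
        def __init__(self, **kw):
            for field in model["fields"]:
                setattr(self, field, kw.get(field))
        return type(model["name"], (object,), {"__init__": __init__})

    Customer = make_class(model)
    c = Customer(id=1, name="Ada")
    print(c.name)   # -> Ada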
In illusion-based programming (as practiced in large companies and government agencies) it is very useful, because it allows the generation of very impressive stacks of documentation and shows impressive implementation performance as measured in lines of code per man-month. There, your most important skill is of course timing your disappearing act.
I think the most important thing to keep in mind is WHY you want to generate source code. Is it, for instance, because you are more fluent with UML than any programming language and hence want to generate object-oriented classes from that graphical model?
Is it because you expressed a schema definition in some language (e.g. SQL DDL, from which jOOQ generates code, or XSD, from which JAXB generates code) and want to generate a model from that?
The advantage of code generation is always the fact that you express something only once (as in DRY, as Stephan stated). This is a very good practice that made it deep into extreme programming (among other processes). When you keep things DRY, you will not run the risk that the model differs from its glue code. On the other hand, you might blow up your glue code because it will exactly match its underlying model. Typically, you have one class/type/object per RDBMS table or per XML element.
If, however, you use code generation because you're more at ease with a modelling language (as in MDA, or model-driven architecture), you might run the risk that your generated code is not good enough (lack of detail) or too complicated (lack of simplicity) because - for instance - UML is not suited for solving problems in detail.
In any case: code generation can be very helpful if the generated code can be used AS-IS and does not need any customisation. As soon as you start customising generated code, it may become a maintenance nightmare.
Now that Apple has relaxed the restrictions on developer tools/programs, I wonder what tempts developers to languages other than the one Apple offers by default, Objective-C, which is quite fun to program with. What missing features make you program in something else instead?
Lack of Objective-C expertise or a large/complex code base in another language would be among common reasons.
Cross-platform coding might well be another.
I haven't done any iPhone development yet, but generally speaking, here's a few reasons:
Cross-platform development
The other language suits your coding style better
The other language is a better tool for the job
You are comfortable in the other language and don't have the time / budget / motivation to learn Objective-C
Existing libraries / codebase
Specific tools you might want to use
Testing some concepts in Objective-C can sometimes be kind of tedious to set up. Sometimes you just want to see how a single method works or play around with an object's functionality to see how it works.
Setting up a new project is somewhat tedious, and it's not always feasible to incorporate the test code into a new project.
In this case, I do one of two things:
Keep an empty project around specifically for testing things
Drop down to the Terminal and use irb (or PyObjC) to play with the objects in Ruby or Python.
In a nutshell, the thing that's missing is the ability to use Objective-C in an interpreted manner. You have to use another language (like Ruby or Python) to do this.
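For example, a quick PyObjC session looks roughly like this (a sketch; the bridge maps Objective-C selectors to Python names by replacing colons with underscores):

    # Sketch: poking at Cocoa objects interactively through PyObjC.
    from Foundation import NSMutableArray

    arr = NSMutableArray.array()       # [NSMutableArray array]
    arr.addObject_("hello")            # selector addObject: becomes addObject_
    print(arr.count())                 # -> 1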
I recently wrote some networking code in Python, then had to translate it into Objective-C for use on the iPad. A typical line of clear Python would become five or ten lines of busy-work C. I just work faster in higher-level languages; the language puts up less resistance and requires fewer forms to be filled out.
I have ported a couple of tiny language interpreters (for my own use, not for App Store distribution) to the iPhone. This allows me to write short snippets of code on the road, without having to carry my Mac, and run them locally. I don't know of any small Objective-C interpreters, and the language is not really designed for interactive use.