Now I know that this is actually not a very technical question, but one that has been bothering me for some time. We are using a lot of C++ and PHP at our company, and some of our developers are really hoping for a new and modern language to come by to help us get more productive. I have been talking about what Scala can do, and the other coders seem to be gaining some interest in the language. The tough job is: how do you convince your boss to consider Scala as a language for the company? I saw the presentation "Sneaking Scala into your company", but it deals with the situation where you are already using Java at your company, which we aren't.
How do you fight off the usual "that is just esoteric stuff" and "we can already do that in $LANGUAGE" arguments? I was planning to give a talk about Scala, and since I don't have much time I need ideas on how to get people interested in the language rather than setting off reactions like "currying? we can already do something like this with boost::bind".
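To be concrete, this is the sort of toy example I mean when I bring up currying (purely illustrative, not from our codebase):

```scala
// Toy example only: currying and partial application in Scala.
object CurryingDemo extends App {
  // A curried method: arguments can be supplied in stages.
  def log(level: String)(msg: String): Unit = println(s"[$level] $msg")

  val warn: String => Unit = log("WARN")   // fix the first argument, get a function back
  warn("disk space low")                   // prints: [WARN] disk space low

  // The same idea via partial application of an ordinary method.
  def add(a: Int, b: Int): Int = a + b
  val addFive: Int => Int = add(5, _)
  println(addFive(37))                     // 42
}
```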
How did you guys do it?
Regards,
raichoo
EDIT: Gave my talk yesterday, people were very excited. My company is going to give it a try! Thanks for all your suggestions.
If you don't already have killer arguments, what are you basing your reasoning on that Scala will make your company more productive?
Don't decide you like something and then hunt for reasons to use it at work. Let the reasons speak for themselves.
"A hammer looking for nails"
Using it to do some work on the side, such as data migrations, testing and similar things, will make sure the necessary experience is built up and can give it some exposure.
ScalaTest is really nice to help with acceptance/integration testing. (Yes, I know it is nice for unit testing, but I do not see that immediately happening with C++/PHP target code, and it would probably be unwise).
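To give an idea, an acceptance test against one of the existing services could look roughly like this in ScalaTest (3.x AnyFunSuite style; `httpGet` here is a hypothetical helper standing in for whatever HTTP client you point at the PHP/C++ services):

```scala
// Sketch of a ScalaTest acceptance test (ScalaTest 3.x).
// `httpGet` is a hypothetical helper wrapping your real HTTP client.
import org.scalatest.funsuite.AnyFunSuite

class OrderServiceAcceptanceTest extends AnyFunSuite {

  def httpGet(path: String): (Int, String) = ???  // call the running PHP/C++ service here

  test("the order endpoint answers with HTTP 200 and a JSON body") {
    val (status, body) = httpGet("/orders/42")
    assert(status == 200)
    assert(body.contains("\"orderId\""))
  }
}
```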
Proofs of concept and other prototypes are great for two reasons:
1) They showcase the capabilities
2) You are certain they will be thrown away if you have to reimplement them in C++/PHP
Now, a bad time to introduce Scala would be when you REALLY need it: hopes will be high, it will not immediately work as intended, hopes are dashed and everybody will blame Scala. As a result it will be burnt for a long time in the organisation.
Sooner or later some suit will think it was his idea to introduce Scala and use it on a formal project. If that project is moderately successful, then it is sold.
These kinds of changes are complicated people issues, and the harder you push, the harder you will face push-back. On the other hand, a persistent mind can move mountains.
Redo some of your work-related code in Scala and compare KLOC, code structure and performance. If it looks and works better, show it to your peers and your managers.
In other words:
Talk is cheap. Show me the code.
-- Torvalds, Linus (2000-08-25)
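For instance, a handful of lines of the sort that tend to make the point in such a comparison (purely illustrative names and data, not from any real codebase):

```scala
// Purely illustrative: a value type plus a grouping/aggregation that would take
// noticeably more boilerplate in C++98 or in the PHP of that era.
case class LogEntry(user: String, durationMs: Long)

object Report {
  def slowestUsers(entries: Seq[LogEntry], n: Int): Seq[(String, Long)] =
    entries
      .groupBy(_.user)                                   // Map[String, Seq[LogEntry]]
      .map { case (user, es) => user -> es.map(_.durationMs).max }
      .toSeq
      .sortBy { case (_, worst) => -worst }              // descending by worst duration
      .take(n)
}
```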
In the case of our company (and I assume many companies share the same scenario), the move to Scala (from Java) was initiated by tech people, who wanted to:
1. be more productive writing code (living in the 21st century, use modern approaches);
2. have less trouble building concurrent applications (the actor concept promoted by Scala is far simpler than Java's thread-based concurrency, as sketched below), and with it a simpler way of building scalable staged event-driven architectures.
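A minimal illustration of that actor style (written here against Akka's classic actor API, which superseded the original scala.actors library; the names are purely illustrative):

```scala
// Minimal sketch using Akka classic actors (requires the akka-actor dependency).
import akka.actor.{Actor, ActorSystem, Props}

class Counter extends Actor {
  private var count = 0
  def receive: Receive = {
    case "inc" => count += 1        // messages are handled one at a time: no locks needed
    case "get" => sender() ! count  // reply to whoever asked
  }
}

object ActorDemo extends App {
  val system  = ActorSystem("demo")
  val counter = system.actorOf(Props(new Counter), "counter")

  (1 to 1000).foreach(_ => counter ! "inc")  // fire-and-forget, safe from many threads

  // From outside an actor you would normally use the ask pattern to read the count;
  // omitted here to keep the sketch short.
  system.terminate()
}
```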
In our company, the transition to Scala was more or less simple, because Scala was literally sold to business people as a library for Java :) From their POV, we're still using the same platform (JVM), application servers, etc., but developers are having more fun with their work and are therefore more inspired and work more efficiently.
Maybe you could pitch Scala by showing off the suite of tools that is used for development? For example, if you are not already using Eclipse in your company, show your execs a demo of what a modern IDE can do for your productivity.
There is a book called "Fearless Change" (Linda Rising) that describes a pattern language for "powerless leaders" (I LOVE that role title!). SE-radio had a really motivating interview with the author: http://www.se-radio.net/podcast/2009-06/episode-139-fearless-change-linda-rising. Listen to that interview to collect a few non-technical strategies that can help you in this struggle!
I haven't used Scala yet for any real business code, but I know people who have.
One group used it to write a tool to analyze log files. So they didn't use it for mission-critical business code, but for a non-critical tool to support the project.
Another person I know is an architect, and he just went and wrote some Scala code on his own for some production code without telling his manager. After the code was deployed successfully, he did tell him. One of the things he mentioned is that because Scala runs on the JVM, the people who support the application don't even notice: to them, Scala is just another library that's included with the application (they were already used to the JVM). Of course, this approach is risky and not everybody will be in the position or be willing to do this.
You could start small - use it as your personal preferred scripting language for small things that you need yourself. Tell your fellow developers about it and make them enthusiasts too. If they also start using it then you can step it up to make some side code for your project (such as for example that log analyser tool).
This isn't a really easy task. I would concentrate on the fact that you will be able to produce code, and therefore products, faster and with higher quality. Those are always the two reasons business wants to hear from you and will listen to.
Maybe you can show an example of one or two very small projects you did in your company with C++/PHP and compare the effort, quality etc. with a similar (or the same) implementation in Scala? This would be very impressive and should also convince people who are not on the coding side.
There was a very good talk at Scala Days 2010 by David Copeland:
Sneaking Scala into your organisation
The executive summary: Testing. You can use Scala for testing without affecting release code.
I like the C# 3.0 features, especially lambda expressions, auto-implemented properties and, in suitable cases, implicitly typed local variables (the var keyword), but when my boss discovered that I was using them, he asked me not to use any C# 3.0 features at work. I was told that these features are not standard and are confusing for most developers, and that their usefulness is doubtful. I was restricted to using only C# 2.0 features, and he is also considering forbidding anonymous methods.
Since we are targeting .NET Framework 3.5, I cannot see any reason for these restrictions. In my opinion, maybe the only disadvantage is that my few co-workers and the boss (also a programmer) would have to learn some basics of C# 3.0, which should not be difficult. What do you think about it? Is my boss right, and am I missing something? Are there any good reasons for such a restriction in a development company where C# is the main programming language?
I have had a similar experience (I was asked not to use generics, because they might be confusing to my colleagues).
The fact is that we now use generics, and none of my colleagues has a problem with them. They may not have grasped how to create generic classes, but they sure do understand how to use them.
My opinion on that is that any developer can learn how to use these language features. They may seem advanced at first but as people get used to them the shock of newness lessens.
The main argument for using these features (or any new language features) is that this is a simple and easy way to help my colleagues advance their skills, rather than stagnating.
As for your particular problem of not using lambdas: lots of the updates to the BCL have overloads that take delegates as parameters. These are in many cases most easily expressed as lambdas; not using them that way means ignoring some of the new and updated uses of the BCL.
In regards to the issue of your peers not being able to learn lambdas: I found that Jon Skeet's C# in Depth deals with how they evolved from delegates in a manner that was easy to follow and a real eye-opener. I would recommend you get a copy for your boss and colleagues.
Your boss is going to need to understand that language (and other) improvements are designed to give developers more capabilities and make them more efficient in completing the task at hand, and that if he is not going to allow them for unknown reasons then:
The development team isn't producing at its greatest potential.
The company isn't benefiting from increased efficiency/productivity.
Like others have said, developers aren't worth their salt if they can't keep up with some of the latest improvements in the language that they are using on a daily basis. I suspect your boss hasn't done much coding lately, and it is his inability to understand the latest language improvements that has motivated this decision.
I was told that these features are not standard and are confusing for most developers, and that their usefulness is doubtful. I was restricted to using only C# 2.0 features, and he is also considering forbidding anonymous methods.
Presumably roughly translates to your boss meaning...
These features are confusing for me, and I don't find them useful because I don't understand them.
Which is fairly symptomatic of the Blub paradox (well, or just sheer laziness). Either way there's no merit in what he's saying, and you should start looking for another job if he continues down that road.
If the project is strictly C# 3+ from now on, then you would not break the build by including these items. However, before using them you should be aware of the following:
You can't use them if the project lead gets to make the decision and votes no.
Other than that, you should use them where it makes the code significantly easier to maintain.
You should not use them in ways that are confusing, or unnecessary in the sense that they do not significantly improve the maintainability of the code. This does mean you should not use them where the code is effectively the same or barely improved.
If Microsoft didn't define the standard and these were features that they added to a non-Microsoft language, I would say your boss might have a point. However, since Microsoft defines the language and uses these very features in implementing significant parts of .NET 3.5 (and 4.0), I'd say that you'd be foolish to ignore them. You may not choose to use some of them -- var, for instance, may not be acceptable in all environments due to coding standards -- but a blanket policy of avoiding new features seems unreasonable.
The trickier bit is when should you start using new features, because they can be confusing and may delay development. In general, I choose to use new language features and platform elements on new projects. I often avoid using them on projects that are currently in development when the feature/framework enhancement comes out, deferring until the next project. On a long project, I might introduce them at a significant milestone if the amount of rearchitecting is small or the feature is worth the changes. Normally, I'd wait until the project is due for significant changes anyway and then evaluate if refactoring to newer features is warranted.
The jury is still out on the long-term consequences of some features, but if the main rationale is "it is confusing to other developers" or something similar, then I would be concerned about the quality of the talent.
I like the C# 3.0 features, especially lambda expressions, auto-implemented properties and, in suitable cases, implicitly typed local variables (the var keyword), but when my boss discovered that I was using them, he asked me not to use any C# 3.0 features at work. I was told that these features are not standard and are confusing for most developers, and that their usefulness is doubtful.
He's got a point.
Following that line of thought, let's make a rule against generic collections since List<T> doesn't make any sense (angle brackets? wtf?).
While we're at it, let's eliminate all interfaces (when are you ever gonna need a class without any implementation?).
Hell, let's go ahead and eliminate inheritance since it's so tricky these days (is-a? has-a? can't we all just be friends?).
And use of recursion is grounds for dismissal (Foo() invokes Foo()? Surely you must be joking!).
Errrm... back to reality.
It's not that C# 3.0 features are confusing to programmers; it's that the features are confusing to your boss. He's familiar with one technology and stubbornly refuses to part with it. You're about to enter the Twilight Zone of the Blub Paradox:
Programmers get very attached to their favorite languages, and I don't want to hurt anyone's feelings, so to explain this point I'm going to use a hypothetical language called Blub. Blub falls right in the middle of the abstractness continuum. It is not the most powerful language, but it is more powerful than Cobol or machine language.

And in fact, our hypothetical Blub programmer wouldn't use either of them. Of course he wouldn't program in machine language. That's what compilers are for. And as for Cobol, he doesn't know how anyone can get anything done with it. It doesn't even have x (Blub feature of your choice).

As long as our hypothetical Blub programmer is looking down the power continuum, he knows he's looking down. Languages less powerful than Blub are obviously less powerful, because they're missing some feature he's used to. But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn't realize he's looking up. What he sees are merely weird languages. He probably considers them about equivalent in power to Blub, but with all this other hairy stuff thrown in as well. Blub is good enough for him, because he thinks in Blub.

When we switch to the point of view of a programmer using any of the languages higher up the power continuum, however, we find that he in turn looks down upon Blub. How can you get anything done in Blub? It doesn't even have y.
C# 3.0 isn't hard. Sure, you can abuse it, but it isn't hard or confusing to any programmer with more than a week of C# 3.0 experience. Your boss's skills have just fallen behind and he wants to bring the rest of the team down to his level. DON'T LET HIM!
Continue using anonymous funcs, the var keyword, auto-properties, and what have you to your heart's content. You won't lose your job over it. If he gets pissy about it, laugh it off.
Like it or not, if you plan on using LINQ in any situation, you're going to have to utilize some of the C# 3.0 language specs.
Your boss is going to have to warm up to them if he wants to utilize the feature sets you get from 3.5, which are numerous and worth your time investing in.
Also, from my experience leading teams, I've found that using the 3.0 specs has actually helped devs' readability and understanding of the code base. There's about a week's worth of time spent by a dev trying to understand what the syntax means, but once they get it they much prefer the new way over the old way.
Perhaps you can do a presentation once a week on each feature to everyone and get some of the developers on your side to help convince management of the benefits.
I recently moved from a bleeding-edge C# house to a C# house that was running mostly on .NET 1.1 and some 2.0 projects, using mostly only 1.1 features. Luckily management stays away from the code. Most of the developers love all the new features in the newer frameworks; they just don't have the time or inclination to figure them out by themselves. Once I managed to show them how they could make their own lives easier, they started using them by themselves, and we have migrated several projects to gain the new language features and better tool advantages.
Some people are just afraid of change, maybe because you'll make them all look stupid using fancy new technologies. It could also be that your boss doesn't want the team learning new things instead of getting work done the old-fashioned way.
The var keyword can certainly be abused, but in most cases it reduces redundant code. LINQ is the main thing you want from .NET 3.5 because of the huge time saving in the amount of code you have to write. Your boss should be encouraging you to use it. Also, the base class libraries now take delegates as parameters, so you will be limiting yourself a lot by not using them. Lambdas are just some fancy syntactic sugar to make delegates cleaner.
I would refer you to Effectively Integrating into Software Development Teams and Leading by Example. Two really great articles on how to deal with teams that are afraid of change.
This is marked as a subjective question; I hope I won't get too many downvotes, though.
LV seems to offer a nice graphical alternative to traditional text-based programming. As I understand it, it's not just a virtualization/data-acquisition programming language. Nonetheless, it seems to have that paradigm pegged to its creator's name.
My question comes up because it doesn't seem to be widely used for multi-purpose applications. I'm not an LV expert of any kind; I'm more of a learner. I'm still getting used to LV.
Labview is fantastic if you have National Instruments hardware, and want to do something like acquire, plot and log the data.
When you start interfacing with custom devices, the wiring between modules gets complicated, since you have to do all the string-manipulation work for input to and output from a device.
At my place of work, we found that we got annoyed with having to make massive, complicated VIs to interface with devices, and started writing them in .NET and interfacing them with Labview.
In the end we ended up scrapping Labview all together and using the NI Measurement Studio for Visual Studio to give us all the lovely looking NI controls (waveform plot, tank, gauges, switches etc) with the flexibility of C#.
In summary, even with a couple of 24" screens, sometimes the wiring for Labview code can get too complex and becomes impossible to comment, debug, and make extensible for any future changes. I suggest taking a look at Measurement Studio for Visual Studio and using your favourite .NET language with the pretty NI controls.
My two experiences with "graphic alternative[s] to traditional text based programming" have been dreadful. I find such languages to be slow to use, hard to edit, and inexpressive. Debugging them is a nightmare. And they offer no real advantages.
To be sure, it has been quite a long time since I looked at one, but the opinions of others I've asked about them have been only lukewarm, so I have never taken the time to look again. Reasons to look again are welcome and will be taken on board...
Labview can be used to author large, complex software projects. Labview is unquestionably much more fun to use than a syntax-based language. I have programmed mathematically dense, dynamic simulations using Labview. Newer versions of Labview include a lot of exciting features, especially for utilizing multiple processors. I like Labview very much. But I don't recommend it to anyone.
Unfortunately, it's an absolute nightmare for anything other than simple acquisition and display. It may one day be sufficiently developed to be considered a viable alternative to text-based languages. However, the developers at NI have consistently opted to ignore the three fundamental problems that plague Labview.
1) It is unstable and riddled with bugs. There are thousands of bugs that have been posted to the Labview support forums that are yet to be fixed. Some of these are quite serious, such as memory leaks or mathematical errors in basic functions.
2) The documentation is atrocious. More often than not, when you look for help with a Labview function in the local help file, you'll find a sentence that merely restates the name of the item you are trying to find some detail on. E.g. a user looks up the help file on the texture filter mode setting, and the only thing written in the help file is "Texture Filter Mode - selects the mode used for texture filtering." Gee, thanks. That clears things right up, doesn't it? The problem goes much deeper: quite often, when you ask a technical representative from National Instruments to provide critical details about Labview functionality or the specific behavior of mathematical functions, they simply don't know how the functions in their own library work. This may sound like an exaggeration, but trust me, it's not.
3) While it's not impossible to keep graphical code clean and well documented, Labview is designed to make these tasks both difficult and inefficient. In order to keep your code from becoming a tangled, confusing mess, you must routinely (every few operations) employ structures like clusters, sub-VIs and giant type-defined controls (which can stretch over multiple screens in a large project). These structures eat memory and destroy performance by forcing Labview to make multiple copies of data in memory and perform gratuitous operations - all for the sake of keeping the graphical diagram from looking like rainbow-colored spaghetti with no comments or text anywhere in sight. Programming in Labview is like playing Pictionary with the devil. Imagine your giant software project written as a wall-sized flowchart with no words on it at all. Now imagine that all the lines cross each other a thousand times so that tracing the data flow is completely impossible. You have just envisioned the most natural and most efficient way to program in Labview.
Labview is cool. Labview is getting better with each new release. If National Instruments keeps improving it, it will be great one day as a general programming language. Right now, it's an extremely bad choice as a software development platform for large or logically complex projects.
I have been writing in LabVIEW for almost 20 years now. I develop automated test systems. I have developed RF, vision, high-speed digital and many different flavors of mixed-signal test systems. I was a "C" programmer before I switched to LabVIEW.
It's true that you can build some programs quickly in LabVIEW, but just like any other language it takes a lot of training to learn to build a large application that is clean, easy to maintain and built from reusable code. In 20 years I have never had a LabVIEW bug stop me from finishing a project.
Back in the day, NIWEEK would have a software shootout every year. LabVIEW and LabWindows (NI's version of "C") programmers would both be given the same problem and have a race to see which group finished first. Each and every year all the LabVIEW programmers were done way before the first LabWindows person finished. I have challenged many of my dedicated text-based programming friends to shootouts and they all admit they don't stand a chance, even if I let them define the software problem.
So, I feel LabVIEW is a great programming tool. It's definitely the way to go if you're interfacing with any type of NI hardware. It's not the answer for everything, but I'm sure there are many people not using it just because they don't consider LabVIEW a "real programming language". After all, we just wire a bunch of blocks together, right? I do find it funny how many text-based programmers turn their noses up at it, as they are so proud of the mess of text code they have created that only they can understand. A good programmer in any language should write code that others can easily read. Writing overly complex code that is impossible to follow does not make the programmer a genius; it means the programmer is a "complicator" (someone who can take a simple problem and complicate it). I believe in the KISS principle (KEEP IT SIMPLE, STUPID).
Anyway, there's my two cents' worth!
I thought LabVIEW was a dream for FPGA programming. Independent executable blocks just... work. In general, I use LabVIEW for various tasks interfacing with my DAQ and FPGA hardware, but that's about it. It seems (again to me) that this is LabVIEW's strong point and the reason it was built, but outside that arena it feels "cumbersome." As far as getting things done, it's like any other language with a learning curve - once you figure it out it's not too bad for getting work done. I've seen several people give up before that thinking the learning curve was permanent or something.
Picking up a 30" monitor made a huge difference.
I know one thing that people dislike is the version control integration.
Edit: LabVIEW/hardware is hella expensive for "just for fun" use. I dropped $10K on their hardware (student prices) and got the software for free from school for making toys around the house.
Our company has been using LabVIEW for the last 10 years for measuring, monitoring and reporting on our subject (trains).
Recently we have started using LabVIEW as a GUI for databases with lots of data; the power of LabVIEW with the recent new features (classes, XControls) allows us to create these kinds of GUIs for a fraction of the development cost on other platforms, and without needing external programmers at consultancy rates.
Ton
I first started using Labview in a college physics lab. Initially, I thought it was slow and cumbersome compared to text-based languages. It was too difficult to create complex logic, and code became sloppy real fast (wires everywhere).
Then, a few years later, I learned about using sub-VIs and bundles. What a difference! At this point, I was using Labview for very high-level functions. I was taking raw input from a camera, using all kinds of image filters and processing to ultimately parse out the lines in a road so that a vehicle could drive itself down this road with no driver - it was for the DARPA Urban Challenge. I was also generating maps from text waypoint data, making high-level parsing functions, and a slew of other applications that had nothing to do with processing data from input devices. It was really a lot of fun, and FAST.
After leaving college, I am now back to using text-based languages. I've been using PHP, Javascript, VBA, C#, VBscript, VB.net, Matlab, Epson RC+, CodeIgniter, various APIs, and I'm sure some others. I often get very frustrated by the amount of syntax I have to memorize in order to program with any significant speed. I find it annoying to have to switch schools of thought based on the language I am using... when all programming languages essentially do the same thing! I need a second monitor just to have the help up at all times so I can find the syntax for the same functions in different languages. I miss Labview very much; it's too bad it's so expensive, otherwise I would use it for everything.
Graphical programming, I think, has huge potential. By not being constrained by syntax, you can focus on logic instead of code. Labview itself may still be in its infancy in terms of support and debugging, but I believe conceptually it beats out the competition. It's simply a more intuitive way to program.
We use LabVIEW for running our end-of-line test equipment, and it is ideal for data acquisition and control. Typically measuring 15 to 80 differential voltages and controlling environmental chambers, mass flow controllers and various serial devices, LabVIEW is more than capable.
Interfacing with custom devices can be simplified greatly by using the NI instrument driver wizard to create reusable VIs, interfacing with custom DLLs if needed. On a number of projects we have created such drivers for custom hardware, and once created they are reusable in future projects with no modification.
Using event-driven structures, user interfaces are responsive, and we regularly use LabVIEW applications to interface with a database.
Whatever programming environment you choose, it's the process of designing the application that matters most. I agree that you can create some really horrible and unreadable block diagrams in LabVIEW, but then you can also create unreadable code in Visual Studio. With just a little thought and planning, a LabVIEW block diagram can be made to fit on a single 24" monitor with plenty of space to add comments.
I would use LabVIEW over Visual Studio for most projects.
But people do use LabView for purposes other than data acquisition and virtualization. Of course, LabVIEW is mainly used in labs and production environments because that is (or was) one of NI's main customer targets.
However, you can do a lot of different things with LabVIEW, like programming a robot that performs a lot of image analysis and then tweets the results. Have a look at videos from NI Week 2009 on YouTube and you'll see how powerful this tool is. For instance, it is possible to write code and deploy it to ARM MCUs (see this Dev Monkey article from 2009.08.10).
And finally, check out this LabVIEW DIY group.
I have been using LabVIEW for about two years for developing automation. Given due care and proper design, we sure can develop maintainable and really good-looking applications in LabVIEW. I think this is the same for all the other languages out there. I have seen equally bad code in LabVIEW, primarily from people who use it only to develop quick-and-dirty working automation. IMHO graphical programming is a lot easier to code and understand if done right. But that said, I feel text-based programming "feels" more powerful!
LabVIEW is primarily marketed for industrial automation, has inherent support for a lot of NI hardware, and you can get third-party hardware working with it pretty quickly. I think that is the reason you see it only in the automation field. Moreover, it is pretty costly, and you are locked in with NI, as you cannot even open your code if you do not buy the software from them!
I've been thinking about this question for decades (yes, since 1989...)
Like all programming languages, LabVIEW is a high-level tool used to manipulate the flow of electrons. Unless you are a purist and refuse to use anything other than a breadboard and wires; transistors, integrated circuits and programming languages are probably a good thing if you wish to build something of any consequence.
But like all high-level tools, just wielding one does not make you a professional craftsman. Back in the day of soldering irons, op-amps and UARTs it required a large amount of careful study before you could create a system that actually functioned. The modern realm of text-based languages is so overly dominated by syntax that the programmer must get it just right before it will compile and run. In order to write code that works, the programmer must increase their skill level to create systems much larger than "Hello World".
LabVIEW is not dominated by syntax, but by Data Flow. Back in the day, reaching for your flow charting template and developing the diagram of a well-balanced information system was the art and beauty part of the job. Only after you had the reviewed flowchart in hand would you even consider slogging through the drudgery of punching out the code. (yes... punch cards)
LabVIEW is a development system that allows the programmer to use flow-charting tools to diagram the complete information system and press "run". LabVIEW "punches out the code" and compiles it for you. No need to fight through the syntax of text language A or language B.
With such a powerful tool, novices can build large, working programs rapidly -- implying some level of professional craftsmanship since it runs at all. However, if the system does not perform elegantly, or the source code diagram is a mess, it is not the fault of LabVIEW.
People often point to "LabVIEW is only good for developing large data acquisition systems." Perhaps those people should consider the professionalism of the scientists and engineers that are working in data acquisition. If they know enough to get the actual wires right for the sensors and transducers, it may be a good bet that they are expert at developing LabVIEW wiring diagrams as well.
I do use LabView at home, as it is part of Lego Mindstorms, which my son loves. And I really like composing systems that way.
However, in my work (embedded systems), it is generally too restrictive. But here, too, I'm trying to move up in abstraction:
- control and state behavior: model-based design (e.g. Rhapsody)
- data algorithms etc.: Simulink
Sometimes a graphical model can require more clicks than a piece of code. But this also includes the work a good programmer needs to do in design and documentation, not just the code typing. The graphical notation takes many hassles away and is generally much faster if the tool is powerful enough for the complexity at hand. So I expect these kinds of tools to gain more popularity in the coming years as they mature and people get familiar with them.
I have used LabView for some 10 years. It's brilliant for scientific programming, i.e. like Matlab or Simulink but 10 times better. If you are having problems, then you are doing something wrong. It takes time to learn, like any language. As for using .NET instead - are these people even on the same planet? Why would you go to the trouble of writing everything from scratch when you can, say, pull up an FFT and use already-written code? .NET is fine for simple programs, but not so good for scientific processing. Yes, you can do it, but not without oodles of add-ons for graphics etc. Programming in G is far easier than text-based programming for scientific problems. You can of course program in C if you are interfacing, and use the DLL. Now, there are things that I would not use LabView for - speech recognition, for example, may be a bit messy at present. More to the point, though, why do people like programming in outdated text form when there is an easy alternative? It is as if people want to make things complicated so as to justify their job in some way. Simplify, simplify!
Somebody said that LabView is only used in the automation field. Simply not right at all. It has applications in digital signal processing, control systems, communications, web-based applications, mathematics, image processing and so on. It started as a data acquisition method, and they invented the name Virtual Instrumentation, but it has gone far beyond that now. It is a scientific programming language with a second-to-none graphical interface. It is way beyond Simulink, and if you like Matlab then it has a type of Matlab scripting built in for those who like such ways of programming. It is evolving all the time. The one thing I found difficult was writing code for the CompactRIO - tricky, but far easier than the alternative. It's expensive, but you get a quality product. I personally have not found any bugs in ordinary programming. It is an engineer's language, but anybody could use it to program.
I've worked on a number of products that make use of code generation. It seems to be the only way to achieve both a high degree of user-customizability and high execution speed.
The downside is that we are requiring users to install a compiler (primarily on MS Windows).
This has been an on-going headache, because vendors like MS keep obsoleting compilers, and some users tend to have more than one compiler installed.
We're considering using GNU C, and possibly C++, but even there, there are continual version issues.
I've considered possibly generating assembly language, in an effort to get off the compiler-version-treadmill, but assembly languages are all machine-specific.
Ideally there would be some way to produce generated code that would be flexible, run fast, and not expose us to the whims of third-party providers.
Maybe I'm overlooking something simple, like Java. Any ideas would be appreciated. Thanks.
If you're considering C and even assembler, take a look at LLVM first: http://llvm.org
I might be missing some context here, but could you just pin yourself to a specific version? E.g., .NET 2.0 can be installed side by side with .NET 1.1 and .NET 3.5, as well as other versions that will come out in the future. So as long as your code makes use of a specific version of a compiler, what's the problem?
I've considered possibly generating assembly language, in an effort to get off the compiler-version-treadmill, but assembly languages are all machine-specific.
That would be called a compiler :)
Why don't you stick to C90?
I haven't heard of many severe standards violations on gcc's side, as long as you don't use extensions.
And you can always distribute a certain version of gcc along with your product, say 4.3.2, giving users the option to use their own compiler at their own risk.
As long as all code is generated by you (i.e. you don't embed your instructions into others' code), there shouldn't be any problem with testing against this version and using it to compile your libraries.
If you want to generate assembly language code, you may take a look at asmjit.
One option would be to use a language/environment that provides access to the compiler in code; For example, here is a C# example.
Why not ship a GNU C compiler with your code generator? That way you have no version issues, and the client can constantly generate code that is usable.
It sounds like you're looking for LLVM.
Start here: The Code Generation conference
In the spirit of "might not be to late to add my 2 cents" as in #Alvin's answer's case, here is something I'd think about: if your application is meant to last for some years, it is going to face several changes in how applications and systems work.
For instance, let's say you were thinking about this 10 years ago. I was watching Dexter back then, but I guess you actually have memories of how things were at that time. From what I can tell, multithreading was not much of an issue to developers of 2000, and now it is. So Moore's law broke for them. Before that people didn't even care about what will happen in "Y2K".
Speaking of Moore's law, processors are indeed getting quite fast, so maybe certain optimizations won't be even that necessary. And possibly the array of optimizations will be much bigger, some processors are getting optimizations for several server-centric stuff (XML, cryptography, compression and regex! I am surprised such things can get done on a chip) and also spend less energy (which is probably very important for warfare hardware...).
My point being that focusing on what exist today as a platform for tomorrow is not a good idea. Make it work today, and surely it will work tomorrow (backward-compatibility is especially valued by Microsoft, Apple is not bad it seems and Linux is very liberal about making it work as you want).
There is, yes, one thing that you can do. Attach your technology to something that just won't (likely) die, such as Javascript. I'm serious, Javascript VMs are getting terribly efficient nowdays and are just going to get better, plus everyone loves it so it's not going to dissappear suddenly. If needing more efficiency/features, maybe target the CRL or JVM?
Also I believe multithreading will become more and more of an issue. I have a gut feeling the number of processor cores will have a Moore's law of their own. And architectures are more than likely to change, from the looks of the cloud buzz.
PS: In any case, I believe C optimizations of the past are still quite valid under modern compilers!
I would stick to that language that you use for generating that language. You can generate and compile Java code in Java, Python code in Python, C# in C#, and even Lisp in Lisp, etc.
But it is not clear whether such languages are sufficiently fast for you. For top speed I would choose to generate C++ and use GCC for compilation.
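For instance, a minimal sketch of that same-language approach, using Scala 2 as the host purely as an example (runtime compilation needs the scala-reflect and scala-compiler artifacts on the classpath):

```scala
// Sketch: compile and evaluate generated Scala source at runtime (Scala 2.x).
// Requires scala-reflect and scala-compiler on the classpath.
import scala.reflect.runtime.currentMirror
import scala.tools.reflect.ToolBox

object GeneratedCodeDemo extends App {
  val toolbox = currentMirror.mkToolBox()

  // Pretend this string came out of your code generator.
  val generatedSource = "(x: Double) => x * x + 2 * x + 1"

  val compiled = toolbox.eval(toolbox.parse(generatedSource)).asInstanceOf[Double => Double]
  println(compiled(3.0)) // 16.0 -- no externally installed compiler required
}
```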
Why not use something like SpiderMonkey or Rhino (JavaScript support in Java or C++). You can export your objects to JavaScript namespaces, and your users don't have to compile anything.
Embed an interpreter for a language like Lua/Scheme into your program, and generate code in that language.
With the interest of creating a roguelike RPG (such as Nethack, Rogue, and ADOM), which programming language would be most suitable and why?
With the language that you choose, be sure to list any libraries or facets of the language that make it particularly well-suited.
Way back in the day I tried to write roguelike games using QuickBASIC, of all things (it was 1988). Not the recommended approach...
There are still some development circles out there. Here's an FAQ on Roguelike Development and also a blog dedicated to the same.
My language of choice (I'm trying to create a roguelike too) is Python, because:
It's a high-level programming language; I don't need to think about memory allocation all the time, etc., and can keep my mind on the algorithms.
There are tons of useful libraries for almost everything. Recently I've found TDL/libtcod, which can be useful for roguelike development.
With bindings you can easily use C/C++ libraries, or even write a few critical functions in C/C++ and use them.
It's the most readable programming language I've ever seen.
While programming in Python I've learned to use internal documentation. It's a very helpful thing: I can just read my code a few months later and still know what it's doing.
That's a very personal choice as always :-)
I wrote my Roguelike game (Tyrant) in Java for the following reasons:
Very portable (even with graphics)
Garbage collection / memory management
Lots of good free / open source libraries available (helpful for algorithms, data structures and manipulating save game files etc.)
It's a statically typed language - this has performance and robustness benefits which I judged to be worth the additional coding complexity
I wanted to hone my Java skills more generally for use in other projects
EDIT: For those interested it is open source, all code is available at SourceForge
Well, I've made a couple of roguelikes in C, spending a fair amount of time at RogueBasin, which is a great site for anything related to roguelike development.
As for what language you should use, I don't really see it making a huge difference. I pick C because of the portability, and a lot of libraries work well with it. But an object-oriented language can clean up some things that you may not want to keep track of.
There aren't any languages that I would consider to be specifically greater than the rest for roguelikes. If you're making it graphical, you may prefer something that has that built in, such as Flash/Silverlight. But even then there are libraries for any other language that bring them to about the same degree of difficulty in that regard.
So I'd say take a language you know and like, or one that you don't know and want to learn.
Most of these answers are great, but there's something to be said for the combined power of object-oriented stuff and low-level commands that can be abused in C++. If you're looking for some inspiration, the C source code to NetHack is widely available and documented well enough that you can certainly poke around to learn some things. That said, it's a huge project that's been growing for decades, and not everything is as clean as you're going to want things for your own project - don't get roped into making poor design choices based off of what you find in NetHack.
Honestly, though, in terms of what you use it probably doesn't matter at all - though I'd highly recommend using an OO language. There's so much crap to handle in a roguelike (heck, any CRPG really) that OOP is the easiest way of staying sane.
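To make that concrete, here is a minimal, hypothetical sketch of the kind of object-oriented entity structure that keeps a roguelike manageable (shown in Scala purely as an illustration; the same shape carries over to Java, C++ or Python):

```scala
// Hypothetical sketch: an OO entity model for a roguelike.
// Names (Entity, Player, Monster, GameMap) are illustrative, not from any real project.
final case class Position(x: Int, y: Int)

trait Entity {
  def position: Position
  def glyph: Char                       // what gets drawn on the ASCII map
}

final case class Player(position: Position, hp: Int) extends Entity {
  val glyph = '@'
  def moved(dx: Int, dy: Int): Player =
    copy(position = Position(position.x + dx, position.y + dy))
}

final case class Monster(position: Position, hp: Int, glyph: Char) extends Entity

final class GameMap(width: Int, height: Int, walls: Set[Position]) {
  def walkable(p: Position): Boolean =
    p.x >= 0 && p.y >= 0 && p.x < width && p.y < height && !walls(p)
}

object TurnLogic {
  // One turn of the game reduces to small, testable steps like this:
  def tryMove(player: Player, dx: Int, dy: Int, map: GameMap): Player = {
    val next = player.moved(dx, dy)
    if (map.walkable(next.position)) next else player
  }
}
```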
The original nethack was written in C, and the source is available if you want to get some ideas about how it was written, and the challenges you may find which might be a good way to start deciding on a language.
My first question would be whether the game is going to have a web based UI or be some kind of console/window affair like the original Rogue-like games? If the former I would say that any language you're comfortable with would be a good choice. Ruby on Rails, Python/Django, PHP/CakePHP, etc. would all be great.
But if the answer is the latter, this is a game that you want people to be able to download and install locally, I'm going to go with Java. It's a great language with no memory management for you to deal with. It achieves very high performance thanks to just-in-time compilation and optimization, and it has an extremely rich library to help you with data structures, Swing makes for some really beautiful UIs, and the 2D library allows for the most rich cross-platform rendering outside of PostScript. It also has the availability across Windows, Mac OS X, and Linux that you're not going to get from some other choices.
Finally, distribution of your application is easy via Java Web Start as well, so people can download and install the game with just a couple of clicks once they have Java and keep it on their machine to run as long as they like.
For making any game, any language will be right if:
you can use it (you know it already, or it's easy enough for you or your team to learn right now);
it produces applications that run on your clients' computers;
it can easily produce applications that run fast enough for your game's needs.
I think that for a roguelike, any language you know will be right, as long as it runs on your target. Performance is not really a problem in this kind of game. World generation can require high performance if your world generation is really complex, though...
Just go with something that will handle the low-level details for you. Whatever you know should work.
Hey, they can write one in JavaScript.
I recommend ActionScript for those games.
You could consider Silverlight.
It sits on top of C# and .NET, so there's not much need to worry about memory management. With SL you'll get built-in support for scene-graph-type rendering: culling of things not on screen, keyboard and mouse events, clicks on objects, etc.
There's an initial learning curve, but I find it's a great environment to work in.