The Framework/IDE Knowledge Trap [closed] - frameworks

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
We don't teach children calculus first. We first teach them arithmetic, then algebra, then geometry, then analytic geometry, and finally calculus.
Why, then, do we teach our computer scientists frameworks and IDEs first? Some curricula do force students to learn computer science fundamentals, but the vast majority of graduates that I see could not compose a framework of their own to save their lives.
Where then is the next generation of tool builders?
How can we promote the understanding necessary to create frameworks and development environments?
This is of course a generality. Not all education is lacking, but it seems to be the majority and it brings down the quality of our profession as a whole.

I think the analogy is a bit off. A better analogy would be "We don't teach our kids to use calculators to add and subtract, why teach programmers to use an IDE to program?"

Get rid of HR departments that require X years experience in Y. The universities are just tailoring their course to the HR department's requirements.
I employ graduates who can code in something (I really don't care what language) and who can learn.

I see your point, although I think the math analogy doesn't quite fit. You have to know basic arithmetic to be able to get anything done in any other math discipline.
When I began programming, frameworks were mostly unheard of. If you wanted a binary tree, by God, you went and wrote one. In C or Assembler. That was basically it, so to get anything done at all you had to know a lot.
Today, frameworks, IDEs, and designers make it possible for "noobs" to create actually pretty brilliant things without knowing the first thing about how to build a framework, or a compiler, or manage memory allocation.
The real issue is, what about all the dingbats that think they are awesome, great programmers because they used Frontpage or Access? Managers have a hard time telling the difference between that kind of programmer and one that really knows software development as a discipline.
So, specifically, why is it that way? Because everyone wants a job and nobody hires programmers that know how to build a binary tree. They want programmers that know .Net or J2EE, etc.
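As a point of reference, the "build a binary tree" exercise mentioned above is small; a minimal sketch in Python (purely illustrative, not something from the original post) might look like this:

    class Node:
        """A node in an unbalanced binary search tree."""
        def __init__(self, key):
            self.key = key
            self.left = None
            self.right = None

    def insert(root, key):
        """Insert key into the tree rooted at root; return the (possibly new) root."""
        if root is None:
            return Node(key)
        if key < root.key:
            root.left = insert(root.left, key)
        elif key > root.key:
            root.right = insert(root.right, key)
        return root

    def contains(root, key):
        """Return True if key is somewhere in the tree."""
        while root is not None:
            if key == root.key:
                return True
            root = root.left if key < root.key else root.right
        return False

    root = None
    for k in [8, 3, 10, 1, 6]:
        root = insert(root, k)
    print(contains(root, 6), contains(root, 7))  # True False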

I would argue that there is probably enough work out there for 9-to-5 programmers who can start at the framework level and go up from there. The truly good ones - mostly your program-as-a-career and/or program-as-a-hobby people - are going to pick up the knowledge they may have missed in college over time anyway. You can't force everyone to be a wonderful programmer no matter what curriculum you teach. Inquisitive students are going to learn about the fundamentals whether it's taught to them in class or entirely on their own.

There are tool makers and tool breakers. And of course there are tools, but let's not go there.
If you have a good look at an automotive workshop, you will see a lot of funny little tools that you don't see on the shelves in hardware stores. Like the ones for pushing back brake caliper pistons. Or the clamps for compressing valve stems so you can get the collets out with one hand while talking to your mates about nailing the new secretary (instead of watching them fly across the room when the spring slips out from your screwdriver).
These were designed by mechanics. They're really effective, generally small and cheap, and totally incomprehensible until you've seen them in action.
Most of the profound changes in automotive technology were bottom-up, but top-down is also needed. Individual mechanics can't make fundamental technology changes like the switch from cast iron to alloy heads. A new broom sweeps clean, an old broom knows the corners. You need both.
But I digress: the point is that the mechanics couldn't design these tools if they lacked fundamental skills and knowledge. My father built me an entire motorcycle from scrap iron when I was a kid. As an adult, because I lack his skills and knowledge and modes of thought, I can barely maintain the bike I bought from Honda, much less take to it with an oxy like Mr T in a creative frenzy.
With code, I am as my father was with steel. Donald Knuth is my constant companion, and when the wireless protocol for our GPS loggers needs to be implemented in .NET it's me they come to see. The widget monkeys wouldn't know where to start.

I think the problem is in fact the GUI paradigm in general.
Microsoft made using computers much easier when they popularized the graphical user interface. They brought this interface metaphor (the desktop, the file) to the domain of programming as well, and very effectively too, with their Visual Basic tool.
But just as the GUI obscures what happens "under the hood" so does the IDE obscure the manipulation of bits and bytes. The question is, of course, risk to reward ratio - how much understanding do programmers lose in exchange for productivity?
A cursory look at "The Art of Computer Programming" might show why IDEs are useful; "The ultimate packing density is achieved when we have 1-bit items, because we can cram 64 of them into a single 64-bit word. Suppose, for example, that we want a table of all odd prime numbers less than 1024, so that we can easily decide the primality of a small integer. No problem; only eight 64-bit numbers are required:
p0 = 011101101101001100101101001001001100101100101001000101101101000000
p1 = . . ."
Programming is really hard, you can see how an IDE might help. :^)
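As a rough sketch of what Knuth's bit-packing trick looks like in a modern high-level language (Python here, and with a bit layout chosen for the example rather than copied from the book):

    def sieve(limit):
        """Plain Sieve of Eratosthenes; returns the set of primes below limit."""
        is_prime = [True] * limit
        is_prime[0] = is_prime[1] = False
        for n in range(2, int(limit ** 0.5) + 1):
            if is_prime[n]:
                for m in range(n * n, limit, n):
                    is_prime[m] = False
        return {n for n in range(limit) if is_prime[n]}

    primes = sieve(1024)

    # Pack the primality of the odd numbers 1, 3, 5, ..., 1023 into eight 64-bit
    # words: bit j of word i answers "is 2*(64*i + j) + 1 prime?"
    words = [0] * 8
    for i in range(8):
        for j in range(64):
            if 2 * (64 * i + j) + 1 in primes:
                words[i] |= 1 << j

    def is_odd_prime(n):
        """Look up an odd n < 1024 in the packed table."""
        idx = (n - 1) // 2
        return bool(words[idx // 64] & (1 << (idx % 64)))

    print(is_odd_prime(997), is_odd_prime(999))  # True False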

Learning the abstraction is easier than learning the details when it comes to programming. It's harder to teach someone to hand-code assembler to print "Hello World" than it is to have them throw together a form with a button on it that shows a "Hello World" message when the button is clicked.
You didn't know how to build the engine of a car before learning to drive, did you? Because it's not necessary in order to drive. In the same vein, you don't need to learn how a linked list or binary tree works in order to maintain a list of names and search them.
There will always be those who want to get under the hood and learn the "why" of things, but I don't think it's required to get things done.
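That claim is easy to see in any high-level language; a few lines of Python (just an illustration of the point above) keep and search a list of names with no visible data structure underneath:

    # The built-in list hides whatever storage and search strategy sits underneath.
    names = ["Ada", "Grace", "Linus", "Barbara"]

    names.append("Donald")      # maintain the list
    names.sort()                # keep it ordered if you like

    print("Grace" in names)                          # membership test: True
    print([n for n in names if n.startswith("B")])   # simple search: ['Barbara']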

I always screen applicants by asking difficult questions that they could only answer if they understood how something really works. I think it is a real shame colleges and universities are teaching people framework-based development but not focusing on core software principles. I agree that what matters more than anything is someone who understands how programming works and has the drive to learn anything they can about it.

Most universities I know of have an introduction to computer programming course that teaches basic programming concepts. Unfortunately it is impossible to teach programming without actually writing code.
The problem is that some prefer to teach this course using an OO language such as Java or C#, and so the students must use Visual Studio (or the Java equivalent).
It is very hard to explain the basic concepts when the IDE forces you to work in a certain way.
I think the first language students learn should be a procedural language such as C. That way there are fewer layers of abstraction between them and the basic CS concepts.

Agree with cfeduke.
I looked at the work for the same CS courses I did from 2 years previously, and they were way harder. 5 years previously, way way harder.
The CS bar is being lowered more and more, presumably because there are more and more jobs that don't require any working knowledge of any of the complicated CS subjects. There are huge numbers of jobs for people to just cut code.
Traditionally, people who wanted to be programmers did CS courses, and even as coding has gotten easier this is still the case.
What really needs to happen is for CS to not be a requirement for professional software development. Instead there needs to be another curriculum that focuses more on getting people out the door and cutting code.
This would leave CS as the course for your next generation of tool builders.

Siemens PLC programming best practices [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 4 years ago.
My question is pretty simple. Is there any useful place for learning to work with Siemens PLCs?
Full Disclosure:
I was a Software Engineer for Rockwell Automation working with their A|B PLCs
You probably won't like my answer
To put it plainly, programming PLCs, whether you're using Ladder Logic, Structured Text, Instruction List, Sequential Function Chart, FBD, or Continuous Function Chart, isn't the same as programming software in a language like C++, Java, JavaScript, etc.
Simply put, there is not one set of "best practices" that fits every use case. The reason is that, unlike standard software development, where you can apply principles like SOLID to make your code easier to read, maintain, and extend, PLC programs are tied to a very real physical process and physical machinery. Often, what you find in the industry is that every plant/manufacturer/facility establishes its own set of best practices given its needs and process.
To give an example:
Scenario 1:
The logic used to run the distillation process for a small local brewery may include sub-routines or even a loop. They may allow 5 or fewer warnings in their code, and allow a few unused tags. That is totally fine, because they are making beer, the process isn't critical, a bad batch won't kill anyone, and they only have 2 pumps that they're using the logic to iterate over. So if there is a problem that needs troubleshooting, the logic in the sub-routines or loop won't be too much of a headache.
Scenario 2:
I am a global pharmaceutical company producing hundreds of millions of life-critical drugs each year (say insulin). Now my logic has zero sub-routines and no looping; I have zero tolerance for errors or warnings, and absolutely no unused tags. Why? Because I am in a highly regulated industry, and if there is an issue with one of my products, people may die. And why no sub-routines or looping? Because I am a huge company with hundreds of pumps, mixers, etc. When one of those pieces of equipment goes down, I don't want to look at some horrible looping logic that is responsible for hundreds of pumps. I want to look at one select piece of the logic that I can quickly understand, correct, and get my line back up and operating.
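Real PLC logic would be written in Ladder or Structured Text rather than a general-purpose language, but as a purely illustrative sketch (Python, with invented tag names) of the structural trade-off between the two scenarios - a shared loop versus one explicit block per device:

    # Scenario 1 style: one loop drives every pump. Compact, but a fault on any
    # one pump means stepping through the shared logic to see what happened.
    def scan_pumps_looped(pumps, interlocks):
        for name, pump in pumps.items():
            pump["running"] = pump["run_cmd"] and interlocks[name]

    # Scenario 2 style: one explicit block per pump, no loop. Each block can be
    # read, validated, and corrected in isolation while the line is down.
    def scan_pump_101(run_cmd, interlock_ok):
        return run_cmd and interlock_ok

    def scan_pump_102(run_cmd, interlock_ok):
        return run_cmd and interlock_ok

    pumps = {"pump_1": {"run_cmd": True}, "pump_2": {"run_cmd": False}}
    scan_pumps_looped(pumps, {"pump_1": True, "pump_2": True})
    pump_101_running = scan_pump_101(run_cmd=True, interlock_ok=True)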
I am sure you can find some articles or courses out there (like the one you already took) that explain some basic "best practices", but in the real world you will need to adapt your logic to every individual scenario in order to achieve the best outcome. That is my humble two cents on the matter; best of luck to you!
Udemy - there are some courses there, though I haven't tried them myself.
I've watched lots of useful videos on YouTube.
http://www.plcdev.com/siemens_simatic_step_7_programmers_handbook -
quite old, but could be useful.
Siemens forums, official manuals, guides. There is lots of info there, quality varies sometimes, but mostly good.
BTW, a nice thing about Siemens is that you can often look up things just by searching the web. That is not the case for some other PLCs...
Good luck!
If you already work in a factory, read the code that runs in the PLCs, and start modifying it if needed. That's how I started; I was initially a lowly automation guy who pulled cables, changed broken sensors, etc.
If you don't, and you need a way into the field, then as an ordinary tech worker the path usually runs through being an electrician or automation engineer. As an entrepreneur/independent contractor, I have seen people just do it: win a contract for some public company's request, do the schematics, write the code, and do the electrical installation all by themselves, or do parts of it with other contractors. You need previous experience to pull that off.
As for some practices:
If you are modifying existing code. Always use existing style, existing functions and blocks.
Do not use programming patterns from the ordinary IT world in low-level PLC code, or use them with caution. The reason is that your code probably has to live for years and years and has to be debuggable. Patterns usually add layers of complexity, and complexity leads to harder debugging. In the automation world it's usually better to debug stuff that's closer to the hardware.
If you are starting a project where you have tens or hundreds of sensors/motors/actuators, start using reusable blocks.
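As a rough illustration of what a reusable block buys you (sketched in Python with an invented motor block; a real implementation would be a vendor function block instantiated per device):

    class MotorBlock:
        """One reusable block of start/stop/fault logic, instantiated per motor."""
        def __init__(self, name):
            self.name = name
            self.running = False
            self.fault = False

        def scan(self, start_cmd, stop_cmd, overload):
            """One evaluation of the block, called every program cycle."""
            self.fault = overload
            if stop_cmd or self.fault:
                self.running = False
            elif start_cmd:
                self.running = True
            return self.running

    # A plant with hundreds of motors then repeats configuration, not logic.
    conveyor = MotorBlock("conveyor_1")
    agitator = MotorBlock("agitator_3")
    conveyor.scan(start_cmd=True, stop_cmd=False, overload=False)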
All best practices are learned in the field; sadly, there's no other way. I know it's kind of a catch-22 sometimes: you need work to get experience, and experience to get work. I entered the automation world, and later the IT world, the same way: get a job at the low end, as a maintenance guy or junior IT developer, gather experience, and in a year or two you will be at mid-level.
And don't lose sight of these constraints while you're programming a PLC:
PLC programming is very low level programming
memory size matters, each byte must be important
logic has to be concise and as short as possible: sometimes you have to be good at math!
the machine you're working on is dangerous and can cause damage to product, equipment, or people
the machine you're working on is expensive and is built to produce for years
It's the same as in computer programming: each programmer has their own way to program; there's no single truth. Sometimes you'll find interesting existing code: don't hesitate to re-use it if it looks smarter and is more efficient.
Find your way and keep in mind the machine you're working on is dangerous for you and the people walking around (it's not always the case but it's important to keep this in mind while programming).
And moreover: don't forget the first rule in industrial automation : if it runs correctly, don't touch it !

Which language for my dissertation project? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 12 years ago.
I am doing my dissertation project on NP-Hard problems: I am going to implement various algorithms for problems such as partition, subset sum, knapsack, etc., and then compare the results, the running times, and so on. Also, I am going to see what happens with the algorithms when you modify the problem (how does the algorithm behave on the reduced problem, etc.).
Now I picked this topic as my project because I am interested in theoretical computer science but I am also not sure if I want to go on as an academic/researcher or join a company/startup and this project has both a theoretical and a practical (actual coding) side.
My question is, which programming language should I use? Should I stick to what I feel more familiar with (Java and maybe Python), or should I go with the web languages (HTML, CSS, PHP, RoR, etc), having in mind that web development skills are on high demand nowadays?
EDIT: HTML and CSS would be obviously used just for the UI.
I want my project to be something that will impress in an interview (for either a job or a masters course) and I am not confident that "yet another project in Java" can do that. I understand that as long as the work on it is good and the result is satisfactory I should be OK, but if, let's say, using Ruby can give me some points, I am totally going with that. At the same time, I understand that deciding which language to use is part of the project, so I am not willing to complicate things just to try and look cool.
Thanks in advance!
EDIT: In case this changes any of the answers, this is an undergraduate dissertation project, not a PhD one.
First of all this is a subjective question, not perfectly suitable for SO, but we forgive you :)
Contrary to popular opinion here (looking at the previous answers), if you're trying to solve NP-Hard problems, I would definitely not write the programs in C or C++. Mainly because dynamic programming methods tend to look like absolute dog poop when written in low-level languages. For example, here's someone's dynamic programming solution to the knapsack problem: http://www.joshuarobinson.net/docs/knapsack.html.
It's well-written and well-formed, but barely readable simply due to the sheer amount of malloc, memcpy, and free you need to do. Go with Java or Python, no question about it. You want people to actually read (and maybe even enjoy?) your dissertation, I would assume.
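For comparison, a standard dynamic-programming solution to the 0/1 knapsack problem fits in a few readable lines of Python (this is the textbook recurrence, not code taken from the linked page):

    def knapsack(values, weights, capacity):
        """0/1 knapsack: best[c] holds the best value achievable with capacity c."""
        best = [0] * (capacity + 1)
        for value, weight in zip(values, weights):
            # iterate capacities downwards so each item is used at most once
            for c in range(capacity, weight - 1, -1):
                best[c] = max(best[c], best[c - weight] + value)
        return best[capacity]

    print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220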
Don't write it in PHP or Ruby because those languages aren't particularly applicable to computer science theory. With that said, if you're applying for a web-dev job and you're trying to impress your future employers with a knapsack problem or dynamic programming NP-Hard solvers, it's like shooting a sparrow with a cannonball.
If your project's subject is impressive, no one will care what language it's in. Do it in the language you feel is appropriate for the task. Knowing how to make the appropriate language choice and defending that choice should be more impressive than "OMG I used RoR XSL ActionScript CSS!!!"
Also, how long do you anticipate this project will take? If you go with a language that's flashy and trendy today, do you know it will still be cool and popular when this project wraps up? To put it another way, popularity is not the reason to choose the language for something like this.
If you can invest the effort and time, then I recommend C/C++. It will be an impressive add-on skill.
My language of preference would be Python. You could use Django and, in my opinion, it would be very applicable to things that are being done in the industry (especially with startups). Plus, you can't beat Python when it comes to readability and speed of development.
I would have thought that Python would be doing too much clever stuff under the hood to really be able to measure relative performance accurately.
Wouldn't it be better to use a lower-level language like C? Employers would respect you more for that than using something because it's "cool".
The languages you know look fine to me. The old saw is that a CS PhD makes you unemployable anyway, so I wouldn't worry about it. :-)
The other ones you mentioned are mostly specialized web presentation languages. I'm not real sure how one even goes about implementing the knapsack problem using CSS...
Well, as much as this might look fine on the web page, it seems to me that Java would do a better job doing what you need.
PHP, HTML and CSS knowledge is good for job finding, but not applicable very much on the subject you picked.
Also, I noticed a bunch of answers, so I guess this is a question very much related to personal taste and opinion. Hm... You asked for it, anyways ;)
Since you're already familiar with Python, I'd recommend using it. You can use the popular scipy and numpy libraries for your project. You'll probably find something of use in them.
That would be the core, or backend part of your project. When this part is finished, you should think about polish and presentation. You don't want to have an impressive looking presentation with wrong calculations.
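Since a large part of the project is comparing running times, a small timing harness built on the standard library may be all the infrastructure needed. A sketch (the two subset-sum functions are illustrative placeholders, not a prescribed design):

    import time
    from itertools import combinations

    def subset_sum_bruteforce(items, target):
        """Try every subset: O(2^n), fine only for small n."""
        return any(sum(c) == target
                   for r in range(len(items) + 1)
                   for c in combinations(items, r))

    def subset_sum_dp(items, target):
        """Pseudo-polynomial dynamic programming over the set of reachable sums."""
        reachable = {0}
        for x in items:
            reachable |= {s + x for s in reachable if s + x <= target}
        return target in reachable

    def timed(fn, *args):
        """Return (result, elapsed seconds) for one call of fn."""
        start = time.perf_counter()
        result = fn(*args)
        return result, time.perf_counter() - start

    items = list(range(1, 22))
    target = sum(items) // 2
    for fn in (subset_sum_bruteforce, subset_sum_dp):
        result, elapsed = timed(fn, items, target)
        print(f"{fn.__name__}: {result} in {elapsed:.4f} s")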

Encouraging good development practices for non-professional programmers? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
In my copious free time, I collaborate with a number of scientists (mostly biologists) who develop software, databases, and other tools related to the work they do.
Generally these projects are built on a one-off basis, used in-house, and eventually someone decides "oh, this could be useful to other people," so they release a binary or slap a PHP interface onto it and shove it onto the web. However, they typically can't be bothered to make their source code or dumps of their databases available for other developers, so in practice, these projects usually die when the project for which the code was written comes to an end or loses funding. A few months (or years) later, some other lab has a need for the same kind of tool, they have to repeat the work that the first lab did, that project eventually dies, lather, rinse, repeat.
Does anyone have any suggestions for how to persuade people whose primary job isn't programming that it's of benefit to their community for them to be more open with the tools they've built?
Similarly, any advice on how to communicate the idea that version control, bug tracking, refactoring, automated tests, continuous integration and other common practices we professional developers take for granted are good ideas worth spending time on?
Unfortunately, a lot of scientists seem to hold the opinion that programming is a dull, make-work necessary evil and that their research is much more important, not realising that these days, software development is part of scientific research, and if the community as a whole were to raise the bar for development standards, everyone would benefit.
Have you ever been in a situation like this? What worked for you?
Software Carpentry sounds like a match for your request:
Overview
Many scientists and engineers spend much of their lives programming, but only a handful have ever been taught how to do this well. As a result, they spend their time wrestling with software, instead of doing research, but have no idea how reliable or efficient their programs are.
This course is an intensive introduction to basic software development practices for scientists and engineers that can reduce the time they spend programming by 20-25%. All of the material is open source: it may be used freely by anyone for educational or commercial purposes, and research groups in academia and industry are actively encouraged to adapt it to their needs.
Let me preface this by saying that I'm a bioinformatician, so I see the things you're talking about all the time. There's some truth to the fact that many of these people are biologists-turned-coders who just don't have the exposure to best practices.
That said, the core problem isn't that these people don't know about good practices, or don't care. The problem is that there is no incentive for them to spend more time learning software engineering, or to clean up their code and release it.
In an academic research setting, your reputation (and thus your future job prospects) depends almost entirely on the number and quality of publications that you've contributed to. Publications on methods or new algorithms are not given as much respect as those that report new biological findings. So after I do a quick analysis of a dataset, there's very little incentive for me to spend lots of time cleaning up my code and releasing it, when I could be moving on to the next dataset and making more biological discoveries.
I'll also note that the availability of funding for computational development is orders of magnitude less than that available for doing the biology. In a climate where only 10% of submitted grants are getting funded, scientists don't have the luxury of taking time to clean and release their code, when doing so doesn't help them keep their lab funded.
So, there's the problem in a nutshell. As a bioinformatician, I think it's perverse and often frustrating.
That said, there is hope for the future. With second-and-third generation sequencing, in particular, biology is moving into the realm of high-throughput discovery, where data mining and solid computational pipelines become integral to the success of the science. As that happens, you'll see more and more funding for computational projects, and more and more real software engineering happening.
It's not exactly simple, but demonstration by example would probably drive the point home most effectively - find a task the researcher needs done, find someone who did take the time to make a tool w/source available, and point out how much time the researcher could save as a result due to having that tool available - then point out that they could give back to the community in the same fashion.
In effect, what you are asking them to do is become professional developers (with their copious free time), in addition to their chosen profession. Their reluctance is understandable.
Does anyone have any suggestions for how to persuade people whose primary job isn't programming that it's of benefit to their community for them to be more open with the tools they've built?
Give up. Seriously, this is like teaching a pig to sing. (I can say this because I used to be a physicist so I know what they're like.)
The real issue is that your colleagues are rewarded for scientific output measured in publications, not software. It's hard enough in computer science to get recognized for building software; in the other sciences, it's nearly impossible.
You can't sell good development practice to your biology friends on the grounds that "it's good for you." They're going to ask "should I invest effort in learning about good software practice, or should I invest the same effort to publish another biology paper?" No contest.
Maybe framing it in terms of academic/intellectual responsibility would help, to a degree - sharing your source is, in many ways, like properly citing your sources or detailing your research methodology. There are similar arguments to be made for some of the "professional software developer" behaviors you'd like to encourage, though I think releasing the code is probably an easier sell on these grounds than other things which could require significantly more work.
Actually, asking any busy project team to include in their schedule time for making their software suitable for adoption by another team is extremely hard in my experience.
Doing extra work for the public good is a big ask.
I've seen a common pattern of "harvesting" after the project is complete, reflecting that immediate coding for reuse tends to get lost in the urgency of the day.
The only avenue I can think of is if the reuse is within an organisation with a budget for a "hunter gatherer", someone whose reason for being there is IT.
You may be on more of a win for things such as unit tests because they have immediate payback for the development.
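One way to make that immediate payback concrete is to show a tiny pytest-style test file; the analysis function below is invented purely for illustration:

    # test_normalise.py -- run with `pytest`; the function under test is a stand-in
    # for whatever small analysis routine the lab actually relies on.
    import pytest

    def normalise_counts(counts):
        """Scale a list of read counts so they sum to 1.0."""
        total = sum(counts)
        if total == 0:
            raise ValueError("cannot normalise an all-zero count vector")
        return [c / total for c in counts]

    def test_normalise_counts_sums_to_one():
        result = normalise_counts([2, 2, 4])
        assert result == [0.25, 0.25, 0.5]
        assert abs(sum(result) - 1.0) < 1e-9

    def test_normalise_counts_rejects_all_zero_input():
        with pytest.raises(ValueError):
            normalise_counts([0, 0, 0])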
For one thing, could we please stop teaching biologists Perl? Teaching non-professional programmers a write-only language is practically guaranteed to lead to unmaintainable, throw-away code. Python fills the same niche, is just as easy to learn (it's even used to teach kids programming!), and is much more readable.
Draw parallels with statistics. Stats is a crucial part of scientific research, and one where the only sensible advice is: either learn to do it properly, or get an expert to do it for you. Incorrectly-done stats can completely undermine a paper, just as badly-written code can completely undermine a public database or web resource.
PS: This blog is very good, but getting them to read it will be an uphill struggle: Programming for Scientists
Chris,
I agree with you to a degree, but in my experience what ends up happening is that in their eagerness to publish you end up with too many "me too" codes and methods, which don't really add to the quality of science. If there was a little more thought about open sourcing code and encouraging others to contribute (without necessarily getting publications out of it) then everyone would benefit.
Definitely agree that a separation between the scientific programmers and the software engineers is a good thing, especially for production applications. But even for scientific programming, the quality of my code would have been so much better if I had followed good practices at the time.
In my experience the best way of getting people to program cleanly is to show a good example when you're working with them.
eg: "I never spend hopeless days debugging my code because the first things I code are automated unit tests that will pinpoint problems when they are small and easily detectable"
or: "I'm very bad at keeping track of versions of things, but sometimes my new code does break what did work before. So I use svn/git/dropbox to keep track of things for me"
In my experience that kind of statement can raise the interest of "biologists that learned how to script".
And if you need to collaborate on a bigger project, make it clear that you have more experience and that everything will go more smoothly if things are done your way.
Regarding publication of code, current practice is indeed frustrating. I would like to see a new journal like Source Code for Biology and Medicine, where code is peer-reviewed and can be published, but that has no (or very low) publication costs. Putting code on SourceForge or elsewhere is indeed not "scientifically worth it" because it doesn't add a line to your publication list, and most code is not revolutionary enough to warrant paying $1,000 for publication in Source Code for Biology and Medicine or PLoS One...
You could have them use a content management system, like Joomla. That way they only push content and not code.
I wouldn't so much persuade as I would streamline the process. Document it clearly, make video tutorials and bundle some kind of tool chain that makes it ridiculously easy to get source repositories set up without requiring them to become experts in something that isn't their main field.
Take a really good programmer who already knows best practices, ask your scientists to teach him what they need and what they do, and eventually the programmer will have the minimum domain knowledge (I suspect it takes between 1 and 3 years depending on the domain) to do what the scientists ask for.
Developers always learn another domain of competency, because most of their programs are not for developers, so they need to know what the "client" does.
To be devil's advocate, is teaching scientists to be good software engineers the right thing to do? Software in research is usually very purpose specific - sometimes to the point where a piece of code needs to run successfully only once on a single data set. The results then feed into a publication and the goal is met. And there's a high risk that your technique or algorithm will be superseded by a better one in short order. So, there's a real risk that effort spent producing sparkling code will be wasted.
When you're frustrated by wading through a swamp of ill-formed Perl code, just think that the code you're looking at is one of the rare survivors. Mountains of such code have been written, used a few times, then discarded never to see the light of day again.
I guess I'm just saying there's a big place in research for smelly heinous one-off prototype code. There are good reasons why such code exists. It may not be pretty, but if it gets the job done, who cares? We can always hire a software engineer to write the production-ready version later, IF it turns out to be justified, and let our scientists move on.

Why don't people use LabVIEW for purposes other than data acquisition and virtualization? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
This is marked as a subjective question; I hope I won't get too many downvotes, though.
LV seems to offer a nice graphic alternative to traditional text based programming. As I understand it, it's not just a virtualization/data-acquisition programming language. Nonetheless, it seems to have that paradigm pegged to its creator's name.
My question comes up because it doesn't seem to be widely used for multi-purpose applications. I'm not a LV-expert of any kind, I'm more like a learner. I'm still getting used to LV.
Labview is fantastic if you have National Instruments hardware, and want to do something like acquire, plot and log the data.
When you start interfacing to custom devices, the wiring between modules gets complicated because of all the string manipulation work required for input and output to a device.
At my place of work, we found that we got annoyed with having to make massive, complicated VI's to interface to devices and started writing them in .NET and interfacing them to Labview.
In the end we ended up scrapping Labview altogether and using the NI Measurement Studio for Visual Studio to give us all the lovely looking NI controls (waveform plot, tank, gauges, switches etc) with the flexibility of C#.
In summary, even with a couple of 24" screens, sometimes the wiring for Labview code can get too complex and becomes impossible to comment, debug, and make extensible for any future changes. I suggest taking a look at Measurement Studio for Visual Studio and using your favourite .NET language with the pretty NI controls.
My two experiences with "graphic alternative[s] to traditional text based programming" have been dreadful. I find such languages to be slow to use, hard to edit, and inexpressive. Debugging them is a nightmare. And they offer no real advantages.
To be sure, it has been quite a long time since I looked at one, but the opinions of others I've asked about them have been only lukewarm, so I have never taken the time to look again. Reasons to look again are welcome and will be taken on board...
Labview can be used to author large, complex software projects. Labview is unquestionably much more fun to use than a syntax-based language. I have programmed mathematically dense, dynamic simulations using Labview. Newer versions of Labview include a lot of exciting features, especially for utilizing multiple processors. I like Labview very much. But I don't recommend it to anyone.
Unfortunately, it's an absolute nightmare for anything other than simple acquisition and display. It may one day be sufficiently developed to be considered as a viable alternative to text based languages. However, the developers at NI have consistently opted to ignore the three fundamental problems that plague labview.
1) It is unstable and riddled with bugs. There are thousands of bugs that have been posted to the labview support forums that are yet to be fixed. Some of these are quite serious, such as memory leaks, or mathematical errors in basic functions.
2) The documentation is atrocious. More often than not, when you look for help with a labview function in the local help file you'll find a sentence that merely restates the name of the item you are trying to find some detail on. For example, a user looks up the help file on the texture filter mode setting and the only thing written in the help file is "Texture Filter Mode - selects the mode used for texture filtering." Gee, thanks. That clears things right up, doesn't it? The problem goes much deeper than that: quite often, when you ask a technical representative from National Instruments to provide critical details about labview functionality or the specific behavior of mathematical functions, they simply don't know how the functions in their own library work. This may sound like an exaggeration, but trust me, it's not.
3) While it's not impossible to keep graphical code clean and well documented, Labview is designed to make these tasks both difficult and inefficient. In order to keep your code from becoming a tangled, confusing mess, you must routinely (every few operations) employ structures like clusters, and sub-vis and giant type defined controls (which can stretch over multiple screens in a large project). These structures eat memory and destroy performance by forcing labview to make multiple copies of data in memory and perform gratuitous operations- all for the sake of keeping the graphical diagram from looking like rainbow colored spaghetti with no comments or text anywhere in sight. Programming in labview is like playing pictionary with the devil. Imagine your giant software project written as a wall sized flowchart with no words on it at all. Now imagine that all the lines cross each other a thousand times so that tracing the data flow is completely impossible. You have just envisioned the most natural and most efficient way to program in labview.
Labview is cool. Labview is getting better with each new release. If National Instruments keeps improving it, it will be great one day as a general programming language. Right now, it's an extremely bad choice as a software development platform for large or logically complex projects.
I have been writing in LabVIEW for almost 20 years now. I develop automated test systems. I have developed RF, vision, high-speed digital, and many different flavors of mixed-signal test systems. I was a "C" programmer before I switched to LabVIEW.
It's true that you can build some programs quickly in LabVIEW, but just like any other language it takes a lot of training to learn to build a large application that is clean, easy to maintain, and built from reusable code. In 20 years I have never had a LabVIEW bug stop me from finishing a project.
Back in the day, NIWEEK would have a software shootout every year. LabVIEW and LabWINDOWS (NI's version of "C") programmers would both be given the same problem and have a race to see which group finished first. Each and every year all the LabVIEW programmers were done way before the 1st LabWINDOWs person finished. I have challenged many of my dedicated text based programming friends to shootouts and they all admit they don't stand a chance, even if I let them define the software problem.
So, I feel LabVIEW is a great programming tool. It's definitely the way to go if you're interfacing with any type of NI hardware. It's not the answer for everything, but I'm sure there are many people not using it just because they don't consider LabVIEW a "real programming language". After all, we just wire a bunch of blocks together, right? I do find it funny how many text-based programmers snub their noses at it, as they are so proud of the mess of text code they have created that only they can understand. A good programmer in any language should write code that others can easily read. Writing overly complex code that is impossible to follow does not make the programmer a genius. It means the programmer is a "complicator" (someone who can take a simple problem and complicate it). I believe in the KISS principle (KEEP IT SIMPLE, STUPID).
Anyway, there's my two cents worth!
I thought LabVIEW was a dream for FPGA programming. Independent executable blocks just... work. In general, I use LabVIEW for various tasks interfacing with my DAQ and FPGA hardware, but that's about it. It seems (again to me) that this is LabVIEW's strong point and the reason it was built, but outside that arena it feels "cumbersome." As far as getting things done, it's like any other language with a learning curve - once you figure it out it's not too bad for getting work done. I've seen several people give up before that thinking the learning curve was permanent or something.
Picking up a 30" monitor made a huge difference.
I know one thing that people dislike is the version control integration.
Edit: LabVIEW/hardware is hella expensive for "just for fun" use. I dropped $10K on their hardware (student prices) and got the software for free from school for making toys around the house.
Our company has been using LabVIEW for the last 10 years for measuring, monitoring, and reporting on our subject (trains).
Recently we have started using LabVIEW as a GUI for databases with lots of data; the power of LabVIEW with the recent new features (classes, XControls) allows us to create these kinds of GUIs for a fraction of the development cost on other platforms, and we don't need external programmers at consultancy rates.
Ton
I first started using Labview in a college physics lab. Initially, I thought it was slow and cumbersome when compared to other text-based languages. It was too difficult to create complex logic and code became sloppy real fast (wires everywhere).
Then, a few years later, I learned about using sub-vi's and bundles. What a difference! At this point, I was using labview for very high level functions. I was taking raw input from a camera, using all kinds of image filters and processing to ultimately parse out the lines in a road so that a vehicle could drive itself down this road with no driver - it was for the DARPA URBAN CHALLENGE. I was also generating maps from text waypoint data, making high-level parsing functions, and a slew of other applications that had nothing to do with processing data from input devices. It was really a lot of fun. and FAST.
After leaving college, I am now back to using text-based languages. I've been using: PHP, Javascript, VBA, C#, VBscript, VB.net, Matlab, Epson RC+, Codeigniter, various APIs, and I'm sure some others. I often get very frustrated at the amount of syntax I have to memorize in order to program with any significant speed. I find it annoying to have to switch schools of thought based on the language I am using... when all programming languages essentially do the same thing! I need a second monitor just to have the help up at all times so I can find the syntax for the same functions in different languages. I miss Labview very much; it's too bad it's so expensive, otherwise I would use it for everything.
Graphical programming, I think, has huge potential. By not being constrained by syntax, you can focus on logic instead of code. Labview itself may still be in its infancy in terms of support and debugging, but I believe conceptually it beats out the competition. It's simply a more intuitive way to program.
We use LabVIEW for running our end of line test equipment and it is ideal for data acquisition and control. Typically measuring 15 to 80 differential voltages and controlling environmental chambers, mass flow controllers and various serial devices LabVIEW is more than capable.
Interfacing with custom devices can be simplified greatly by using the NI instrument driver wizard to create reusable VIs, interfacing with custom DLLs if needed. On a number of projects we have created such drivers for custom hardware, and once created they are reusable in future projects with no modification.
Using event-driven structures, user interfaces are responsive, and we regularly use LabVIEW applications to interface with a database.
Whatever programming environment you choose it's the process of designing the application that matters most. I agree that you can create some really horrible and unreadable block diagrams in LabVIEW but then you can also create unreadable code in Visual studio. With just a little thought and planning a LabVIEW block diagram can be made to fit on a single 24" monitor with plenty of space to add comments.
I would use LabVIEW over Visual Studio for most projects.
But people do use LabVIEW for purposes other than data acquisition and virtualization. Of course LabVIEW is mainly used in labs and production environments because those are (or were) NI's main customer targets.
However, you can do a lot of different things with LabVIEW, like programming a robot that performs a lot of image analysis and then tweets the results. Have a look at videos from NI Week 2009 on YouTube and you'll see how powerful this tool is. For instance, there is the possibility to write code and deploy it to ARM MCUs (see this Dev Monkey article from 2009.08.10).
And finally check this LabVIEW DIY group
I have been using LabVIEW for about two years for developing automation. Given due care and proper design, we sure can develop maintainable and really good-looking applications in LabVIEW. I think this is the same for all the other languages out there. I have seen equally bad code in LabVIEW, primarily from people who use it only to develop quick and dirty working automation. IMHO graphical programming is a lot easier to code and understand if done right. But that said, I feel text-based programming 'feels' more powerful!
LabVIEW is primarily marketed for industrial automation, has inherent support for a lot of NI hardware, and you can get third-party hardware working with it pretty quickly. I think that is the reason you see it only in the automation field. Moreover, it is pretty costly, and you are locked in with NI, as you cannot even open your code if you do not buy the software from them!
I've been thinking about this question for decades (yes, since 1989...)
Like all programming languages, LabVIEW is a high-level tool used to manipulate the flow of electrons. Unless you are a purist and refuse to use anything other than a breadboard and wires; transistors, integrated circuits and programming languages are probably a good thing if you wish to build something of any consequence.
But like all high-level tools, just wielding one does not make you a professional craftsman. Back in the day of soldering irons, op-amps and UARTs it required a large amount of careful study before you could create a system that actually functioned. The modern realm of text-based languages is so overly dominated by syntax that the programmer must get it just right before it will compile and run. In order to write code that works, the programmer must increase their skill level to create systems much larger than "Hello World".
LabVIEW is not dominated by syntax, but by Data Flow. Back in the day, reaching for your flow charting template and developing the diagram of a well-balanced information system was the art and beauty part of the job. Only after you had the reviewed flowchart in hand would you even consider slogging through the drudgery of punching out the code. (yes... punch cards)
LabVIEW is a development system that allows the programmer to use flow charting tools to diagram the complete information system and press "run"..... LabVIEW "punches out the code" and compiles it for you. No need to fight through the syntax of text language A or language B.
With such a powerful tool, novices can build large, working programs rapidly -- implying some level of professional craftsmanship since it runs at all. However, if the system does not perform elegantly, or the source code diagram is a mess, it is not the fault of LabVIEW.
People often point to "LabVIEW is only good for developing large data acquisition systems." Perhaps those people should consider the professionalism of the scientists and engineers that are working in data acquisition. If they know enough to get the actual wires right for the sensors and transducers, it may be a good bet that they are expert at developing LabVIEW wiring diagrams as well.
I do use LabView at home, as it is part of Lego Mindstorms, which my son loves. And I really like the way to compose systems like this.
However, in my work (embedded systems), it is generally too restrictive. But also here, I'm trying to move up in abstraction:
- control and state behavior: Model based design (i.e. Rhapsody)
- data algorithms etc. Simulink
Sometimes a graphical model can require more clicks than a piece of code. But this also includes the work a good programmer need to do in design & documentation; not just the code typing. The graphical notation takes many hassles away and is generally much faster if the tool is powerful enough for the complexity at hand. So I expect these kinds of tools will gain more popularity in the next years as they mature and people get familiar with them.
I have used LabVIEW for some 10 years. It's brilliant for scientific programming, i.e. like Matlab or Simulink but 10 times better. If you are having problems then you are doing something wrong. It takes time to learn, like any language. As for using .NET instead - are these people even on the same planet? Why would you go to the trouble of writing everything from scratch when you can, say, pull up an FFT and use already written code? .NET is fine for simple programs but not so good for scientific processing. Yes, you can do it, but not without oodles of add-ons for graphics etc. Programming in G is far easier than text-based programming for scientific problems. You can of course program in C if you are interfacing, and use the DLL. Now there are things that I would not use LabVIEW for - speech recognition, for example, may be a bit messy at present. More to the point, though, why do people like programming in outdated text form when there is an easy alternative? It is as if people want to make things complicated so as to justify their job in some way. Simplify, simplify!
Somebody said that LabVIEW is only used in the automation field. Simply not right at all. It has applications in digital signal processing, control systems, communications, web-based work, mathematics, image processing, and so on. It started as a data acquisition method, and they invented the name Virtual Instrumentation, but it has gone far beyond that now. It is a scientific programming language with a second-to-none graphical interface. It is way beyond Simulink, and if you like Matlab then it has a type of Matlab scripting built in for those who like such ways of programming. It is evolving all the time. The one thing I found difficult was writing code for the CompactRIO - tricky, but far easier than the alternative. It's expensive, but you get a quality product. I personally have not found any bugs in ordinary programming. It is an engineer's language, but anybody could use it to program.

Alice and Scratch ages 8+, how about under 8yrs old? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
I just found out about Alice and Scratch. I will be implementing those pretty soon. But I wonder, what would be good material for kids from 1st grade through 4th/5th grade?
Toontalk is something to look at. I used it successfully with a group of ten- to eleven-year-old children, and it's been used with much younger kids. Of course, I think Scratch has too. But Toontalk is specifically built to feel more like a game. It's essentially a 3D world that kids can explore and interact with, and in which they create programs by training robots. Highly recommended.
http://www.toontalk.com
http://playground.ioe.ac.uk/ABOUT.HTM
http://playground.ioe.ac.uk/games.htm
The Toontalk 3D environment ingeniously operates as a metaphor for sophisticated programming concepts. There are quite a few academic papers linked on the Toontalk site about the educational theory behind Toontalk. Here's one interesting paper that describes how the Toontalk 3D objects map onto abstract programming concepts.
I'll admit, I'm not a professional educator, and my info on kids' programming education may be too obsolete, but my mom was as close as they came to a computer educator in the 1980s, and here are some tricks from her book.
When I was 8, she had no problem teaching me Logo.
I would think that before reading skills are somewhat developed, it would be hard to teach the semantics of any programming language - however simple. And the first "aha!" for programming (to me) would be realizing that if you give really simple commands to the computer, it will do neat stuff for you.
If I had to teach kids that were still working on reading fundamentals, I'd probably focus it on games that are not directly connected to a programming language, but which do involve logic development. Things like:
Assigning letters to codes and translating from letter to code
Games where you follow simple rules to move things around, emulating data structures.
Puzzle games making use of computer science concepts - like shortest path algorithms. Not in analyzing the algorithm, but in developing it in the first place.
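For the adult preparing such a puzzle, the shortest-path idea in that last item is small enough to sketch in a few lines of Python (breadth-first search on a toy grid; purely illustrative and not something the kids themselves would see):

    from collections import deque

    GRID = ["S..#",
            ".#.#",
            "...G"]   # 'S' start, 'G' goal, '#' blocked

    def shortest_path_length(grid):
        """Breadth-first search: steps from S to G, or None if unreachable."""
        rows, cols = len(grid), len(grid[0])
        start = next((r, c) for r in range(rows) for c in range(cols) if grid[r][c] == "S")
        queue = deque([(start, 0)])
        seen = {start}
        while queue:
            (r, c), dist = queue.popleft()
            if grid[r][c] == "G":
                return dist
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#" and (nr, nc) not in seen:
                    seen.add((nr, nc))
                    queue.append(((nr, nc), dist + 1))
        return None

    print(shortest_path_length(GRID))  # 5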
I'm afraid I don't know of a pre-built set of material for this sort of stuff. But I think that you might be able to create your own.
The limits would be the cognitive abilities of the kids -- I know that there are certain points where the theories say that kids can't do certain types of abstract concepts. For example, I was just listening to an example that mentioned that pre-schoolers can't handle the idea that something may have more than one name. Not quite knowing where those points of cognitive growth typically occur, I'm not 100% certain of what game would be right for what age group -- it might be trial and error.
I use Alice to teach children ages 11-14. It works well for them, but I would not use it for children much younger than that unless it was a one-on-one situation. I can't speak for Scratch.
One thing I can speak for though, is Lego Mindstorm programming. There is a cost to it, unlike Alice and Scratch, but it is very approachable for 1st through 4th grade. See if the First Lego League has a group near you so you can join up with others to help with costs.
Scratch is the simplest programming language I have found for kids. You can use it like Logo, but it is much nicer.
I think Alice is too hard for kids of age 8 years.
Microsoft also has Small Basic and shipped v0.2 recently.
This version also includes a cool new feature that allows students to easily graduate from Small Basic to Visual Basic with the touch of a button. Check out the full release notes in the Small Basic blog.
Small Basic is a project that's aimed at bringing "fun" back to programming. By providing a small and easy to learn programming language in a friendly and inviting development environment, Small Basic makes programming a breeze. Ideal for kids and adults alike, Small Basic helps beginners take the first step into the wonderful world of programming.
Download and for more information : MS Small Basic v 0.2
When I was really small we were taught things that have similarities to programming but aren't quite programming, games with puzzles to solve, tangrams, and even choose-your-own-adventure writing programs. Later we learned LOGO.
There are some systems like Toontalk, but to do anything like programming, you need to cope with sequence - this follows that, follows that, follows that - and basic arithmetic. Which is why 8+.
For younger children, you want them to have a good sense of what sequence might be - say, from following instructions - and to be supported by a good interface where drag and drop isn't as fiddly as Scratch's.
RoboMind is a simple educational programming environment with its own scripting language that allows beginners to learn the basics of computer science by programming a simulated robot.
In addition to introducing common programming techniques, it also aims at offering insights in robotics and artificial intelligence. RoboMind is available as stand-alone application for Windows, Linux and Mac OSX. It is free and open source.
Worth giving it a try!
www.robomind.net