Real-time system proof-of-concept project

I'm taking an introductory course (3 months) on real-time systems design, but it doesn't include any implementation.
I would like to build something that helps me understand better what I'll learn in theory, but since I have never built a real-time system I can't estimate how long any project will take. It would be a proof-of-concept project, or something like that, given my available time and knowledge.
Please, could you give me some ideas? Thank you in advance.
I program in T-SQL, Delphi and C#, but I won't have any problem learning another language.

I suggest you consider exploring the Real-Time Specification for Java (RTSJ). While it is not a traditional environment for constructing real-time software, it is an up-and-coming technology with a lot of interest behind it. Even better, you can witness some of the ongoing debate about what matters and what doesn't in real-time systems.
Sun's JavaRTS is freely available for download, and has some interesting demonstrations available to show deterministic behavior, and show off their RT garbage collector.
In terms of a specific project, I suggest you start simple:
1) Build a work generator that you can tune to consume a given amount of CPU time;
2) Put this into a framework that can produce a distribution of work-generator tasks (as threads, or as chunks of work executed in a thread) and a mechanism for logging the work produced;
3) Produce charts of the execution time, sojourn time, deadline, and slack/overrun of these tasks versus their priority;
4) Demonstrate that tasks running in the context of real-time threads (as opposed to timesharing) behave differently.
Bonus points if you can measure the overhead in the scheduler by determining at what supplied load (total CPU time produced by your work generator tasks divided by wall-clock time) your tasks begin missing deadlines.
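To make step 1 concrete (in C rather than RTSJ, purely for illustration), here is a minimal sketch of a tunable work generator that burns roughly a requested amount of CPU time and reports the wall-clock time it took; the function names are my own, and the approach assumes a POSIX system with clock_gettime():

```c
/* Minimal sketch of a tunable CPU work generator (POSIX).
 * burn_cpu_ms() spins until the requested amount of CPU time
 * has been consumed by the calling thread.
 * Compile with: cc -O2 work_gen.c -o work_gen  (add -lrt on older glibc)
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double thread_cpu_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Spin until roughly 'ms' milliseconds of CPU time have been used. */
static void burn_cpu_ms(long ms)
{
    const double start = thread_cpu_seconds();
    volatile double sink = 0.0;          /* keeps the loop from being optimized away */
    while ((thread_cpu_seconds() - start) * 1000.0 < (double)ms)
        sink += 1.0;
    (void)sink;
}

int main(int argc, char **argv)
{
    long ms = (argc > 1) ? atol(argv[1]) : 100;   /* default: 100 ms of work */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    burn_cpu_ms(ms);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double wall_ms = (t1.tv_sec - t0.tv_sec) * 1000.0 +
                     (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("requested %ld ms of CPU time, wall-clock elapsed %.2f ms\n", ms, wall_ms);
    return 0;
}
```

The gap between the requested CPU time and the measured wall-clock time is exactly the sojourn/slack data you would chart in step 3.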

Try to think of real-time tasks that are time-critical, for instance video playback, which fails if tasks (e.g. calculating the next frame) are not finished in time.
You can also think of some industrial solutions, but they are probably more difficult to study in your local environment.

You should definitely consider building your system using a hardware development board equipped with a small processor (ARM, PIC, AVR, any one will do). This really helped remove my fear of the low-level when I started developing. You'll have to use C or C++ though.
You will then have two alternatives: either go bare-metal, or use a real-time OS.
Going bare-metal, you can learn :
How to initialize your processor from scratch and, most importantly, how to use interrupts, which are the fastest way you have to respond to an external event
How to implement lightweight threads with fast context switching, something every real-time OS implements
In order to ease this a bit, look for a dev kit which comes with lots of documentation and source code. I used Embedded Artists ARM boards and they give you a lot of material.
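To give a flavour of the interrupt-driven style mentioned above, here is an illustrative (not board-specific) C sketch in the CMSIS style for a Cortex-M part; "device.h" and toggle_led() are placeholders for your vendor's header and your own GPIO code:

```c
/* Illustrative bare-metal sketch: a periodic tick interrupt driving a
 * simple software timer. Assumes a Cortex-M device with CMSIS headers;
 * "device.h" stands in for your vendor's header.
 */
#include <stdint.h>
#include "device.h"                  /* placeholder: provides SysTick_Config(), SystemCoreClock */

extern void toggle_led(void);        /* placeholder for board-specific GPIO code */

static volatile uint32_t g_ticks;    /* incremented in interrupt context */

/* CMSIS names the SysTick ISR like this; it runs every tick period. */
void SysTick_Handler(void)
{
    g_ticks++;                       /* keep ISRs short: just record the event */
}

static void delay_ms(uint32_t ms)
{
    uint32_t start = g_ticks;
    while ((g_ticks - start) < ms)
        ;                            /* busy-wait; a real RTOS would block the task instead */
}

int main(void)
{
    /* Configure a 1 ms tick, assuming SystemCoreClock is set by startup code. */
    SysTick_Config(SystemCoreClock / 1000u);

    for (;;) {
        toggle_led();
        delay_ms(500);
    }
}
```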
Going with the RT OS :
You'll fast-track your project, and will be able to learn how to fine-tune an RTOS
You may try your hand at an open-source OS, such as Linux or the BSDs, and learn a lot from the source code
Either choice is good, you will get a really cool hands-on project to show off and hopefully better understand your course material. Good luck!

As most real-time systems are still implemented in C or C++, it may be good to brush up your knowledge of these programming languages. Many real-time systems are also embedded systems, so you might want to play around with a cheap open-source board like the BeagleBoard (http://beagleboard.org/). This will also give you a chance to learn about cross-compiling etc.


What is the best way to start programming with Real Time Linux?

Although I have implemented many projects in C, I am completely new to operating systems. I tried real-time Linux on a Discovery board (STM32) and got the correct results for blinking an LED, but I didn't really understand the whole process, since I just followed the steps and could not find a full description of each step on the internet.
I want to implement scheduling on real time linux. What is the best way to start? Any sites, books, tutorials available?
A complete description of the RTLinux process would be appreciated.
Thanks in adv.
The transition from "bare metal" to OS-based programming is something that I experienced in reverse. I started out a complete software guy, totally into the OS side of things, and over time I have moved to the opposite of that (even designing circuits in VHDL!). My advice would be to start simple. Linux is pretty complex, and everywhere you look there are many layers of things all working together to deliver the final product. If you are dead set on a real-time Linux extension, I'd be happy to suggest https://xenomai.org/ which is a real-time extension for Linux.
However, to address your question about implementing scheduling in Linux more specifically: you can, but it is a large amount of work and can be very complicated. The OS uses the Completely Fair Scheduler (CFS, http://en.wikipedia.org/wiki/Completely_Fair_Scheduler ), and whenever you spin up a thread, it simply gets added to the list of things to run. This can differ slightly if you implement your code in kernel space as a driver, rely on hardware interrupts, etc., but in general this is how Linux works. "Real time" generally means the ability to assign threads one of several different priorities and to rely on full thread preemption at any given time, concepts that aren't really part of vanilla Linux. It has some notion of this, but it has limitations that can cause problems when you are looking for real-time behaviour from Linux.
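To make the contrast with vanilla Linux concrete, this is roughly how a thread requests one of Linux's real-time scheduling classes (SCHED_FIFO) through the standard POSIX API; it is only a sketch, it normally needs elevated privileges (root or CAP_SYS_NICE), and it gives soft real-time behaviour rather than hard guarantees:

```c
/* Sketch: request real-time (SCHED_FIFO) scheduling for the current thread.
 * Compile with: cc rt_prio.c -o rt_prio -lpthread
 */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 50 };   /* 1..99 for SCHED_FIFO */

    int err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
    if (err != 0) {
        fprintf(stderr, "could not set SCHED_FIFO (error %d); insufficient privileges?\n", err);
        return 1;
    }

    printf("running with SCHED_FIFO priority %d\n", sp.sched_priority);
    /* ... time-critical work would go here ... */
    return 0;
}
```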
What may be helpful to you is an RTOS. If you are looking for a full-on real-time operating system, check out FreeRTOS http://www.freertos.org/ . It has a large community and supports a lot of different devices out of the box, with a large amount of example code. They even support your specific board with an example package, so you can give it a shot with nothing to lose! http://www.freertos.org/FreeRTOS-for-Cortex-M3-STM32-STM32F100-Discovery.html . It gives you access to many OS-ish constructs like network APIs, memory management, and threading without the overhead and latency of a huge OS. With an RTOS you create tasks and assign them priorities, so you become the scheduler and are no longer at the mercy of the OS. You run the OS, the OS doesn't run you (if that makes sense). Plus, the constructs offered within an RTOS will feel much like bare-metal code and thus will be much easier to follow, understand, and fully learn. It is a simpler world in which to learn the basic building blocks of a full-blown OS such as Linux or Windows.

If this option sounds good, I would suggest looking through the supported devices on the FreeRTOS website, picking one you would like to experiment with, and going for it. I would highly recommend this as a way to learn about scheduling and OS constructs in general, as it is as simple as you can get and open source.

Once you have the basics of an RTOS down, buying a book about Linux specifically wouldn't be a bad idea. Although there are many free resources on the web related to learning about Linux, they are commonly contradictory and can be misleading. Pile Linux-specific knowledge on top of OS knowledge in general and it can feel overwhelming. Starting simpler will help keep you from getting burnt out and minimize the amount of time you spend feeling lost. Linux is definitely a learning process, but like any learning process: start simple, keep your ultimate goal in mind, make a plan, and take small, manageable steps along that plan until you look up and find yourself exactly where you want to be. Then go tackle the next mountain!
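For a feel of the task/priority model described above, here is an abbreviated sketch using the standard FreeRTOS API (xTaskCreate, vTaskDelay); it assumes a working FreeRTOS port and a FreeRTOSConfig.h for your board, so treat it as illustrative rather than copy-paste ready:

```c
/* Illustrative FreeRTOS sketch: two tasks with different priorities. */
#include "FreeRTOS.h"
#include "task.h"

static void vHighPriorityTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        /* time-critical work, e.g. sampling an input */
        vTaskDelay(pdMS_TO_TICKS(1));      /* run every 1 ms */
    }
}

static void vLowPriorityTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        /* background work, e.g. logging; preempted whenever the
         * high-priority task becomes ready */
        vTaskDelay(pdMS_TO_TICKS(100));
    }
}

int main(void)
{
    /* You choose the priorities, so you are effectively the scheduler. */
    xTaskCreate(vHighPriorityTask, "hi", configMINIMAL_STACK_SIZE, NULL,
                tskIDLE_PRIORITY + 2, NULL);
    xTaskCreate(vLowPriorityTask,  "lo", configMINIMAL_STACK_SIZE, NULL,
                tskIDLE_PRIORITY + 1, NULL);

    vTaskStartScheduler();                 /* never returns if all went well */
    for (;;) { }
}
```

The point is that the high-priority task preempts the low-priority one whenever it becomes ready, which is exactly the deterministic behaviour vanilla Linux does not promise.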
The real-time Linux landscape is quite confusing. 99.99% of the information out there is just plain obsolete.
First, there are lots of "microkernels" that run Linux as one task. (Such as the defunct RTLinux). The problem is that you must write your real-time task to a different API, and can't depend on anything in Linux, because Linux will be frozen in the background while your task runs. So unless your task is dead-simple ("stop the motors when I press this button"), this approach will cause more pain than gain.
Next, there is the realtime Linux patch set. This hasn't been doing so well, because of the next item:
Lastly, the current Linux kernel has gotten rid of the problems that caused people to need realtime in the past. You can even turn off Linux on one of your processors to have full control of the CPU. See also this paper.
To answer your question: I see two different paths you could take:
1) Start with a normal 3.xx Linux kernel and explore the various APIs and realtime techniques (e.g. realtime priorities, memory pinning, etc.; see the sketch after this list). This can get you "close enough" for 99% of what people want "realtime" for. If it's good enough for high-frequency trading, it's probably good enough for you.
2) If you have a hard realtime requirement and you are worried that Linux won't cut it, then (as Nick mentioned above) just go buy a processor and write your realtime code with no OS. By splitting your "realtime" and "non-realtime" code onto different CPUs, you will make the whole system simpler and much more robust.
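As an example of the memory-pinning technique referenced in option 1, a realtime-ish Linux process typically locks its pages into RAM so page faults cannot add unpredictable latency; a minimal sketch using the POSIX mlockall() call (privilege and memlock-limit caveats apply):

```c
/* Sketch: lock all current and future pages in RAM to avoid page-fault
 * latency, a common first step in "realtime-ish" Linux programs.
 * Compile with: cc pin_mem.c -o pin_mem
 */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");        /* usually needs root or a raised memlock limit */
        return 1;
    }
    printf("memory locked; page faults should no longer add latency spikes\n");

    /* ... set a realtime priority and do the time-critical work here ... */

    munlockall();
    return 0;
}
```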
If you want to learn real-time operating systems, then I suggest that you get an FPGA board, for example the Altera DE2, and experiment with your own operating system and uC/OS. You can read a good text about embedded RTOSes here.
You could also get a Raspberry Pi running Linux and write your own operating system for that.

Extensive comparison between SIMULINK and LabVIEW

I am trying to determine which of these two to buy for my work. I have used SIMULINK but not LabVIEW. Is there anyone who has used both and would like to provide some details? My investigation criteria are the user friendliness, availability of libraries and template functions, real-time probing facility, COTS hardware interfacing opportunity, quality of code generation, design for testability (i.e. ease of generating unit/acceptance tests), etc. However, if anyone would like to educate me with more criteria, please do so by all means!
For anyone who does not know about SIMULINK and LabVIEW - These are both Domain-Specific Languages (DSLs) intended for graphical dataflow modelling (and also code generation). These are multi-industrial tools and quite heavily used for engineering design and modelling.
IMPORTANT - I am quite interested to know if SIMULINK and LabVIEW offer real-time probing. For example, I have a model that I want to simulate. If there are variables associated to certain building blocks in that model, could I view them changing as the simulation continues? I know that it is certainly not possible with SIMULINK as it has a step-by-step debugger. I am not aware of anything similar in LabVIEW.
I really have not used LabVIEW and cannot obtain it temporarily as my work internet has got download restrictions and administrative privilege issues. This is the reason why I simply cannot use only NI website to draw conclusions. If there is any white paper available that addresses this issue, I would also love to know :)
UPDATE SINCE LAST POST
I have used the MATLAB code generator and will not say that it is the best. However, I hear now that Simulink Embedded Coder is the best code generator and almost one of its own kind. Can anyone confirm whether or not it is good for safety-critical system design, i.e. generating code from safety-critical subsystem models? I know that The MathWorks is constantly trying to close the gap to achieve fully flexible, production-level C/C++ code generation.
I know that an ideal answer would be, "Depending on what you are trying to do, use a bit of both." And interestingly, I think I am heading in that direction. At the end of the day, it is a lot of money and needs to be spent wisely.
Thanks in advance.
I have used LabVIEW since 1995 and Simulink since 2000. Now I am involved in control system design and simulation of robotic systems using LabVIEW Real-Time, and of automotive ECUs using MATLAB/Simulink/dSPACE.
LabVIEW is focused on measurement systems, and MATLAB/Simulink on dynamic simulation, so:
If you run complex simulations, and your work is creating/debugging complex simulation models of controllers or plants, use Simulink + Real-Time Workshop + Stateflow. LabVIEW has no efficient code generators for dynamic simulation; RTW generates smaller and faster code.
If your main work is developing systems with controllers and GUIs for machines, or you want to deploy the controllers in the field, use LabVIEW.
If your main work is developing flexible HIL or SIL systems with a good GUI, you can use VeriStand. VeriStand can mix Simulink and LabVIEW code.
And if you have a big budget (VERY BIG) and you are working on automotive control prototypes, dSPACE hardware is a very good choice for fast development of automotive ECUs, or OPAL-RT to develop electric power circuits. But only for prototyping or HIL testing of controllers.
From the point of view of COTS hardware:
The MathWorks doesn't manufacture hardware -> MATLAB/Simulink support hardware from several vendors.
National Instruments produces/sells hardware -> LabVIEW Real-Time is focused on supporting National Instruments hardware. There is no full COTS replacement.
I have absolutely no experience with Simulink, so I'll comment only on LV, although a quick read about Simulink on Wikipedia seems to indicate that it's focused mainly on simulation and modelling, which is certainly not the case with LabVIEW.
OK, so first of all, LV is NOT a DSL. While you wouldn't want to use it for every project, it is a general-purpose programming language and you should take that into account. I know that NI has a simulation toolkit for LV, which might help you if that's what you're after, but I have absolutely no experience with it. The images I saw of it seemed to indicate that it adds a special kind of diagram to LV for simulation.
Second, LV is not restricted to any kind of hardware. It's a general purpose language, so you can write code which won't use any hardware at all, code which will use or run on NI's hardware or code which will use any hardware (be it through DLL calls, .NET assemblies, RS232, TCP, GPIB or any other option you can think of). There is quite a large collection of LV drivers for various devices and the quality of the driver usually depends on who wrote it.
Third, you can certainly probe in real time in LV. You write your code, just as you would in C or Java, and when you run it, you have several debugging options:
Single stepping. This isn't actually all that common, partially because LV is parallel.
Execution highlighting. This runs the code in slow motion, while showing all the values in the various wires.
Probes, which show you the last value that each wire had, where wires fill the same function that variables do in text based languages. This updates in real time and I assume is what you want.
Retain wire values, which allows you to probe a wire even after data passed through it. This is similar to what you get in text based IDEs with variables. In LV you don't usually have it because wire values are transient, so the value is not kept around unless you explicitly ask for it.
Of course, since you're talking about code, you could also simply write the code to display the values to the screen on a graph or a numeric indicator or to log them to a file, so there should be no need for actual probing. You could also add analysis code, etc.
Fourth, you could try downloading and running LV in a fully functional evaluation mode. If I remember correctly, NI currently gives you 7 days and then 45 days if you register on their site. If you can't do that on a work computer, you could try at home. If your problem is only with downloading, you could try contacting your local NI office and asking them to send you a DVD.
Note that I don't really know anything about modelling and simulation, so I have no idea what kind of code you would actually have to write in order to do what you want. I assume that if NI has a special module for it, then it's not something that you can completely cover in regular code (at least not if you want the original notation), but I would say that if you could write the code that does what you want in C, there's no reason you shouldn't be able to write it in LV (assuming, of course, that you know how to write code in LV).
A lot of the best answer would have to depend on your ultimate design requirements. Are you developing a product? If so, in what stage of development are you? Or are you doing research?
I recently did a comparison just as you are doing. I know LV, but was wanting to move towards a more hardware-scalable option, since NI HW is very expensive in volume. That is, my company was wanting to move towards a product. What LV and NI HW give you is flexibility. You can change code very quickly compared to C. On the other hand, LV does not run on nearly as many different HW platforms as C. So I wanted to find an inexpensive platform that would work well for real-time control and data acquisition, such that if we wanted to sell a product for, say, $30k, our controller wouldn't be costing $15k of that. We ended up with Diamond Systems Linux SBCs. Interestingly, Simulink ended up using the most expensive hardware! It did have a lot of flexibility, and could generate code, as well as model plants and controllers. But then, LV can do that as well.
As Yair wrote, LV has plenty of good debugging tools. One of the more interesting tools that is not so well known is the Suspend when Called option for a SubVI. This allows you to play with the inputs and outputs of a SubVI as much as you want while execution is paused.
MATLAB and Simulink are the de facto standard for control system design and simulation. Simulink controller models can be used for offline simulation in conjunction with plant models, all the way to realtime implementation on embedded targets. It is a general simulation framework with extensive built-in libraries as well as a la carte special-purpose libraries, and can be extended through creation of custom blocks (S-function blocks) in C and other languages. It includes the ability to display values in graphs, numeric displays, gauges, etc. while a non-realtime simulation is taking place. Realtime target support from The MathWorks includes x86 (xPC Target) and several embedded targets (MPC555, etc.), and there is 3rd-party support for other targets. The aforementioned dSPACE provides complete prototyping controllers, including support for their quite powerful hardware. xPC Target includes support for a plethora of COTS PC data acquisition cards. Realtime target support includes GUI elements such as graphs, numeric displays, gauges, etc.
As I understand it (I have never really used it in anger), LabView only supports NI hardware, and is more hardware-oriented. Simulink supports hardware from multiple vendors, be it for data acquisition, or real-time implementation, but it may require a bit more work for the user to interface to his or her own hardware (less plug & play than LabView). On the other hand, Simulink provides tools to support the whole model-based design process, from modelling & simulation, control design, verification & validation, code generation, hardware-in-the-loop, etc...
Disclaimer: I used to work for MathWorks.
You guys may really be interested in the Control Design and Simulation Module for LabVIEW. It does a lot of simulation and in the future may be competitive with Simulink. I'm not a control engineer but I use it sometimes for simple testing, and I'm glad that I don't have to learn Simulink from the beginning to do some work, since I'm familiar with the LabVIEW philosophy.

The state of programming and compiling for multicore systems

I'm doing some research on multicore processors; specifically I'm looking at writing code for multicore processors and also compiling code for multicore processors.
I'm curious about the major problems in this field that would currently prevent a widespread adoption of programming techniques and practices to fully leverage the power of multicore architectures.
I am aware of the following efforts (some of these don't seem directly related to multicore architectures, but seem to have more to do with parallel-programming models, multi-threading, and concurrency):
Erlang (I know that Erlang includes constructs to facilitate concurrency, but I am not sure how exactly it is being leveraged for multicore architectures)
OpenMP (seems mostly related to multiprocessing and leveraging the power of clusters)
Unified Parallel C
Cilk
Intel Threading Building Blocks (this seems to be directly related to multicore systems; makes sense as it comes from Intel. In addition to defining certain programming constructs, it also seems to have features that tell the compiler to optimize the code for multicore architectures)
In general, from what little experience I have with multithreaded programming, I know that programming with concurrency and parallelism in mind is definitely a difficult concept. I am also aware that multithreaded programming and multicore programming are two different things: in multithreaded programming you are ensuring that the CPU does not remain idle on a single-CPU system (as James pointed out, the OS can schedule different threads to run on different cores, but I'm more interested in describing the parallel operations from the language itself, or via the compiler). As far as I know, on a single CPU you cannot truly do parallel operations. In multicore systems, you should be able to perform truly parallel operations.
So it seems to me that currently the problems facing multicore programming are:
Multicore programming is a difficult concept that requires significant skill
There are no native constructs in today's programming languages that provide a good abstraction to program for a multicore environment
Other than Intel's TBB library, I haven't found efforts in other programming languages to leverage the power of multicore architectures for compilation (for example, I don't know whether the Java or C# compiler optimizes the bytecode for multicore systems, or even whether the JIT compiler does that)
I'm interested in knowing what other problems there might be, and if there are any solutions in the works to address these problems. Links to research papers (and things of that nature) would be helpful. Thanks!
EDIT
If I had to condense my question down to one sentence, it would be this: What are the problems that face multicore programming today and what research is going on in the field to solve these problems?
UPDATE
It also seems to me that there are three levels where multicore needs to be concerned:
Language level: Constructs/concepts/frameworks that abstract parallelization and concurrency and make it easy for programmers to express the same
Compiler level: If the compiler is aware of what architecture it is compiling for, it can optimize the compiled code for that architecture.
OS level: The OS optimizes the running process and perhaps schedules different threads/processes to run on different cores.
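As a small illustration of the OS-level item, on Linux you can also pin a thread to a particular core yourself using the (non-portable, GNU-specific) pthread_setaffinity_np call; a minimal sketch:

```c
/* Sketch: pin the calling thread to CPU core 0 (Linux, GNU extension).
 * Compile with: cc -D_GNU_SOURCE affinity.c -o affinity -lpthread
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);                       /* restrict this thread to core 0 */

    int err = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    if (err != 0) {
        fprintf(stderr, "pthread_setaffinity_np failed: %d\n", err);
        return 1;
    }
    printf("thread pinned to core 0\n");
    return 0;
}
```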
I've searched on ACM and IEEE and have found a few papers. Most of them talk about how difficult it is to think concurrently and also how current languages don't have a proper way to express concurrency. Some have gone so far as to claim that the current model of concurrency that we have (threads) is not a good way to handle concurrency (even on multiple cores). I'm interested in hearing other views.
I'm curious about the major problems in this field that would currently prevent a widespread adoption of programming techniques and practices to fully leverage the power of multicore architectures.
Inertia. (BTW: that's pretty much the answer to all "what does prevent the widespread adoption" questions, whether that be models of parallel programming, garbage collection, type safety or fuel-efficient automobiles.)
We have known since the 1960s that the threads+locks model is fundamentally broken. By ~1980, we had about a dozen better models. And yet, the vast majority of languages that are in use today (including languages that were newly created from scratch long after 1980), offer only threads+locks.
The major problems with multicore programming are the same as with writing any other concurrent application, but whereas before it was uncommon to have multiple CPUs in a computer, now it is hard to find any modern computer with only one core in it, so taking advantage of multicore, multiple-CPU architectures brings new challenges.
But this problem is an old one: whenever computer architectures advance beyond what compilers can handle, the fallback solution seems to be a move back toward functional programming, since that paradigm, if strictly followed, can produce very parallelizable programs, as you don't have any global mutable variables, for example.
But not all problems can be solved easily using FP, so the goal then is how to make other programming paradigms easy to use on multicore.
The first thing is that many programmers have avoided writing good multithreaded applications, so there isn't a large pool of well-prepared developers; they have learned habits that will make this kind of coding harder to do.
But, as with most changes to the cpu, you can look at how to change the compiler, and for that you can look at Scala, Haskell, Erlang and F#.
For libraries, you can look at the Parallel Extensions framework from Microsoft as a way to make it easier to do concurrent programming.
I don't have the reference at hand (it is at work), but recently either IEEE Spectrum or IEEE Computer had articles on multicore programming issues, so look at what IEEE and ACM articles have been written on these issues to get more ideas as to what is being looked at.
I think the biggest impediment will be the difficulty of getting programmers to change their language, as FP is very different from OOP.
One place for research, besides developing languages that will work well this way, is how to handle multiple threads accessing memory; as with much in this area, Haskell seems to be at the forefront in testing ideas for this, so you can look at what is going on with Haskell.
Ultimately there will be new languages, and it may be that we have DSLs to help abstract the developer more, but how to educate programmers on this will be a challenge.
UPDATE:
You may find Chapter 24. Concurrent and multicore programming of interest, http://book.realworldhaskell.org/read/concurrent-and-multicore-programming.html
One of the answers mentioned the Parallel Extensions for the .NET Framework, and since you mentioned C#, it's definitely something I would investigate. Microsoft has done some interesting things there, though I have to think many of their efforts seem more suited to language enhancements in C# than to a separate and distinct library for concurrent programming. But I think their efforts are worth applauding, and I respect that we're early here. (Disclaimer: I used to be the marketing director for Visual Studio about 3 years ago.)
Intel Threading Building Blocks are also quite interesting (Intel recently released a new version, and I'm excited to head down to the Intel Developer Forum next week to learn more about how to use it properly).
Lastly, I work for Corensic, a software quality startup in Seattle. We've got a tool called Jinx that is designed to detect concurrency errors in your code. A 30-day trial edition is available for Windows and Linux, so you might want to check it out. (www.corensic.com)
In a nutshell, Jinx is a very thin hypervisor that, when activated, slips in between the processor and operating system. Jinx then intelligently takes slices of execution and runs simulations of various thread timings to look for bugs. When we find a particular thread timing that will cause a bug to happen, we make that timing "reality" on your machine (e.g., if you're using Visual Studio, the debugger will stop at that point). We then point out the area in your code where the bug was caused. There are no false positives with Jinx. When it detects a bug, it's definitely a bug.
Jinx works on Linux and Windows, and in both native and managed code. It is language and application platform agnostic and can work with all your existing tools.
If you check it out, please send us feedback on what works and doesn't work. We've been running Jinx on some big open source projects and already are seeing situations where Jinx can find bugs 50-100 times faster than simply stress testing code.
The bottleneck of any high-performance application (written in C or C++) designed to make efficient use of more than one processor/core is the memory system (caches and RAM). A single core usually saturates the memory system with its reads and writes, so it is easy to see why adding extra cores and threads causes an application to run slower. If a queue of people can pass through a door one at a time, adding extra queues will not only clog the door but also make the passage of any one individual through the door less efficient.
The key to any multi-core application is optimization of and economizing on memory accesses. This means structuring data and code to work as much as possible inside their own caches, where they don't disturb the other cores with accesses to the common cache (L3) or RAM. Once in a while a core needs to venture there, but the trick is to reduce those situations as much as possible. In particular, data needs to be structured around and adapted to cache lines and their sizes (currently 64 bytes), and code needs to be compact and not call and jump all over the place, which also disrupts pipelines.
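As one concrete instance of structuring data around cache lines, per-thread counters are commonly padded so that two cores never write to the same line (false sharing); a C11 sketch, assuming 64-byte cache lines as stated above:

```c
/* Sketch: per-thread counters aligned to an (assumed) 64-byte cache line so
 * that cores updating neighbouring counters do not ping-pong the same line
 * between their caches (false sharing).
 * Compile with: cc -std=c11 -O2 -pthread padded.c -o padded
 */
#include <pthread.h>
#include <stdalign.h>
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE 64              /* assumption: typical cache-line size */
#define NUM_THREADS 4
#define ITERATIONS 10000000UL

struct padded_counter {
    alignas(CACHE_LINE) uint64_t value;   /* each counter gets its own line */
};

static struct padded_counter counters[NUM_THREADS];

static void *worker(void *arg)
{
    int id = (int)(intptr_t)arg;
    for (unsigned long i = 0; i < ITERATIONS; i++)
        counters[id].value++;      /* touches only this thread's cache line */
    return NULL;
}

int main(void)
{
    pthread_t t[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&t[i], NULL, worker, (void *)(intptr_t)i);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(t[i], NULL);

    for (int i = 0; i < NUM_THREADS; i++)
        printf("counter %d = %llu\n", i, (unsigned long long)counters[i].value);
    return 0;
}
```

Removing the alignas and packing the counters next to each other is an easy way to observe the slowdown the answer describes.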
My experience is that efficient solutions are unique to the application in question. The generic guidelines (above) are a basis on which to construct code but the tweak changes resulting from profiling conclusions will not be obvious to those who were not themselves involved in the optimizing work.
Look up fork/join frameworks and work-stealing runtimes. These are two names for the same, or at least related, approaches: recursively subdivide large tasks into lightweight units, such that all available parallelism is exploited, without having to know in advance how much parallelism there is. The idea is that it should run at serial speed on a uniprocessor, but get a linear speedup with multiple cores.
Sort of a horizontal analogue of cache-oblivious algorithms if you look at it right.
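OpenMP's task construct is one readily available way to experiment with this recursive-subdivision style in C without writing your own work-stealing runtime; a sketch of a fork/join-style array sum, assuming a compiler with OpenMP support (e.g. gcc -fopenmp):

```c
/* Sketch: recursive fork/join-style summation using OpenMP tasks.
 * Large ranges are split in two; small ranges are summed serially.
 * Compile with: cc -fopenmp -O2 task_sum.c -o task_sum
 */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define CUTOFF 10000               /* below this size, just sum serially */

static long long sum_range(const int *a, long lo, long hi)
{
    if (hi - lo <= CUTOFF) {
        long long s = 0;
        for (long i = lo; i < hi; i++)
            s += a[i];
        return s;
    }

    long mid = lo + (hi - lo) / 2;
    long long left = 0, right = 0;

    #pragma omp task shared(left)
    left = sum_range(a, lo, mid);      /* may be stolen by another core */

    right = sum_range(a, mid, hi);     /* current thread takes the other half */

    #pragma omp taskwait               /* join: wait for the spawned task */
    return left + right;
}

int main(void)
{
    const long n = 10 * 1000 * 1000;
    int *a = malloc(n * sizeof *a);
    for (long i = 0; i < n; i++)
        a[i] = 1;

    long long total = 0;
    #pragma omp parallel
    #pragma omp single                 /* one thread starts the recursion */
    total = sum_range(a, 0, n);

    printf("total = %lld (expected %ld)\n", total, n);
    free(a);
    return 0;
}
```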
But I'd say the main problem facing multicore programming is that the great majority of computations remain stubbornly serial. There's just no way to throw multiple cores at those computations and make them stick.

Why don't people use LabVIEW for purposes other than data acquisition and virtualization? [closed]

This is marked as a subjective question, I hope I won't get too many down votes though.
LV seems to offer a nice graphical alternative to traditional text-based programming. As I understand it, it's not just a virtualization/data-acquisition programming language. Nonetheless, it seems to have that paradigm pegged to its creator's name.
My question comes up because it doesn't seem to be widely used for multi-purpose applications. I'm not a LV-expert of any kind, I'm more like a learner. I'm still getting used to LV.
Labview is fantastic if you have National Instruments hardware, and want to do something like acquire, plot and log the data.
When you start interfacing to custom devices, the wiring between modules gets complicated by having to do all the string-manipulation work for input and output to a device.
At my place of work, we found that we got annoyed with having to make massive, complicated VI's to interface to devices and started writing them in .NET and interfacing them to Labview.
In the end we ended up scrapping Labview all together and using the NI Measurement Studio for Visual Studio to give us all the lovely looking NI controls (waveform plot, tank, gauges, switches etc) with the flexibility of C#.
In summary, even with a couple of 24" screens, sometimes the wiring for Labview code can get too complex and becomes impossible to comment, debug, and make extensible for any future changes. I suggest taking a look at Measurement Studio for Visual Studio and using your favourite .NET language with the pretty NI controls.
My two experiences with "graphic alternative[s] to traditional text based programming" have been dreadful. I find such languages to be slow to use, hard to edit, and inexpressive. Debugging them is a nightmare. And they offer no real advantages.
To be sure, it has been quite a long time since I looked at one, but the opinions of others I've asked about them have been only lukewarm, so I have never taken the time to look again. Reasons to look again are welcome and will be taken on board...
Labview can be used to author large, complex software projects. Labview is unquestionably much more fun to use than a syntax-based language. I have programmed mathematically dense, dynamic simulations using Labview. Newer versions of Labview include a lot of exciting features, especially for utilizing multiple processors. I like Labview very much. But I don't recommend it to anyone.
Unfortunately, it's an absolute nightmare for anything other than simple acquisition and display. It may one day be sufficiently developed to be considered as a viable alternative to text based languages. However, the developers at NI have consistently opted to ignore the three fundamental problems that plague labview.
1) It is unstable and riddled with bugs. There are thousands of bugs that have been posted to the labview support forums that are yet to be fixed. Some of these are quite serious, such as memory leaks, or mathematical errors in basic functions.
2) The documentation is atrocious. More often than not, when you look for help with a labview function in the local help file, you'll find a sentence that merely restates the name of the item you are trying to find some detail on. For example, a user looks up the help file on the texture filter mode setting and the only thing written in the help file is "Texture Filter Mode- selects the mode used for texture filtering." Gee, thanks. That clears things right up, doesn't it? The problem goes much deeper than that: quite often, when you ask a technical representative from National Instruments to provide critical details about labview functionality or the specific behavior of mathematical functions, they simply don't know how the functions in their own library work. This may sound like an exaggeration, but trust me, it's not.
3) While it's not impossible to keep graphical code clean and well documented, Labview is designed to make these tasks both difficult and inefficient. In order to keep your code from becoming a tangled, confusing mess, you must routinely (every few operations) employ structures like clusters, and sub-vis and giant type defined controls (which can stretch over multiple screens in a large project). These structures eat memory and destroy performance by forcing labview to make multiple copies of data in memory and perform gratuitous operations- all for the sake of keeping the graphical diagram from looking like rainbow colored spaghetti with no comments or text anywhere in sight. Programming in labview is like playing pictionary with the devil. Imagine your giant software project written as a wall sized flowchart with no words on it at all. Now imagine that all the lines cross each other a thousand times so that tracing the data flow is completely impossible. You have just envisioned the most natural and most efficient way to program in labview.
Labview is cool. Labview is getting better with each new release. If National Instruments keeps improving it, it will be great one day as a general programming language. Right now, it's an extremely bad choice as a software development platform for large or logically complex projects.
I have been writing in LabVIEW for almost 20 years now. I develop automated test systems. I have developed RF, vision, high-speed digital and many different flavors of mixed-signal test systems. I was a "C" programmer before I switched to LabVIEW.
It's true that you can build some programs quickly in LabVIEW, but just like any other language it takes a lot of training to learn to build a large application that is clean, easy to maintain, and built from reusable code. In 20 years I have never had a LabVIEW bug stop me from finishing a project.
Back in the day, NIWEEK would have a software shootout every year. LabVIEW and LabWindows (NI's version of "C") programmers would both be given the same problem and have a race to see which group finished first. Each and every year all the LabVIEW programmers were done way before the first LabWindows person finished. I have challenged many of my dedicated text-based programming friends to shootouts and they all admit they don't stand a chance, even if I let them define the software problem.
So, I feel LabVIEW is a great programming tool. It's definitely the way to go if you're interfacing with any type of NI hardware. It's not the answer for everything, but I'm sure there are many people not using it just because they don't consider LabVIEW a "real programming language". After all, we just wire a bunch of blocks together, right? I do find it funny how many text-based programmers snub their noses at it, as they are so proud of the mess of text code they have created that only they can understand. A good programmer in any language should write code that others can easily read. Writing overly complex code that is impossible to follow does not make the programmer a genius. It means the programmer is a "complicator" (someone who can take a simple problem and complicate it). I believe in the KISS principle (Keep It Simple, Stupid).
Anyway, there's my two cents worth!
I thought LabVIEW was a dream for FPGA programming. Independent executable blocks just... work. In general, I use LabVIEW for various tasks interfacing with my DAQ and FPGA hardware, but that's about it. It seems (again to me) that this is LabVIEW's strong point and the reason it was built, but outside that arena it feels "cumbersome." As far as getting things done, it's like any other language with a learning curve - once you figure it out it's not too bad for getting work done. I've seen several people give up before that thinking the learning curve was permanent or something.
Picking up a 30" monitor made a huge difference.
I know one thing that people dislike is the version control integration.
Edit: LabVIEW/hardware is hella expensive for "just for fun" use. I dropped $10K on their hardware (student prices) and got the software for free from school for making toys around the house.
Our company is using LabVIEW for the last 10 years for measuring, monitoring and reporting of our subject (trains).
Recently we have started using LabVIEW as a GUI for databases with lots of data; the power of LabVIEW with the recent new features (classes, XControls) allows us to create these kinds of GUIs for a fraction of the development cost on other platforms, and we don't need external programmers at consultancy rates.
Ton
I first started using Labview in a college physics lab. Initially, I thought it was slow and cumbersome when compared to other text-based languages. It was too difficult to create complex logic and code became sloppy real fast (wires everywhere).
Then, a few years later, I learned about using sub-vi's and bundles. What a difference! At this point, I was using labview for very high level functions. I was taking raw input from a camera, using all kinds of image filters and processing to ultimately parse out the lines in a road so that a vehicle could drive itself down this road with no driver - it was for the DARPA URBAN CHALLENGE. I was also generating maps from text waypoint data, making high-level parsing functions, and a slew of other applications that had nothing to do with processing data from input devices. It was really a lot of fun. and FAST.
After leaving college, I am now back to using text-based languages. I've been using: PHP, Javascript, VBA, C#, VBscript, VB.net, Matlab, Epson RC+, Codeigniter, various APIs, and I'm sure some others. I often get very frustrated with the amount of syntax I have to memorize in order to program with any significant speed. I find it annoying to have to switch schools of thought based on the language I am using... when all programming languages essentially do the same thing! I need a second monitor just to have the help up at all times so I can find the syntax for the same functions in different languages. I miss Labview very much; it's too bad it's so expensive, otherwise I would use it for everything.
Graphical programming, I think, has huge potential. By not being constrained by syntax, you can focus on logic instead of code. Labview itself may still be in its infancy in terms of support and debugging, but I believe conceptually it beats out the competition. It's simply a more intuitive way to program.
We use LabVIEW for running our end-of-line test equipment and it is ideal for data acquisition and control. Typically measuring 15 to 80 differential voltages and controlling environmental chambers, mass flow controllers and various serial devices, LabVIEW is more than capable.
Interfacing with custom devices can be simplified greatly by using the NI instrument driver wizard to create reusable VIs, interfacing with custom DLLs if needed. On a number of projects we have created such drivers for custom hardware, and once created they are reusable in future projects with no modification.
Using event-driven structures, user interfaces are responsive, and we regularly use LabVIEW applications to interface with a database.
Whatever programming environment you choose it's the process of designing the application that matters most. I agree that you can create some really horrible and unreadable block diagrams in LabVIEW but then you can also create unreadable code in Visual studio. With just a little thought and planning a LabVIEW block diagram can be made to fit on a single 24" monitor with plenty of space to add comments.
I would use LabVIEW over Visual Studio for most projects.
But people do use LabVIEW for purposes other than data acquisition and virtualization. Of course LabVIEW is mainly used in labs and production environments, because that is (or was) one of NI's main customer targets.
However, you can do lots of different things with LabVIEW, like programming a robot that performs a lot of image analysis and then tweets the results. Have a look at videos from NI Week 2009 on YouTube, and you'll see how powerful this tool is. For instance, it is possible to write code and deploy it to ARM MCUs (see this Dev Monkey article from 2009.08.10).
And finally check this LabVIEW DIY group
I have been using LabVIEW for about two years for developing automation. If given due care and proper design, we sure can develop maintainable and really good-looking applications in LabVIEW. I think this is the same for all the other languages out there. I have seen equally bad code in LabVIEW, primarily from people who use it only to develop quick-and-dirty working automation. IMHO graphical programming is a lot easier to code and understand if rightly done. But that said, I feel text-based programming 'feels' more powerful!
LabVIEW is primarily marketed for industrial automation, has inherent support for lots of NI hardware, and you can get third-party hardware working with it pretty quickly. I think that is the reason you see it only in the automation field. Moreover, it is pretty costly, and you are locked in with NI, as you cannot even open your code if you do not buy the software from them!
I've been thinking about this question for decades (yes, since 1989...)
Like all programming languages, LabVIEW is a high-level tool used to manipulate the flow of electrons. Unless you are a purist and refuse to use anything other than a breadboard and wires; transistors, integrated circuits and programming languages are probably a good thing if you wish to build something of any consequence.
But like all high-level tools, just wielding one does not make you a professional craftsman. Back in the day of soldering irons, op-amps and UARTs it required a large amount of careful study before you could create a system that actually functioned. The modern realm of text-based languages is so overly dominated by syntax that the programmer must get it just right before it will compile and run. In order to write code that works, the programmer must increase their skill level to create systems much larger than "Hello World".
LabVIEW is not dominated by syntax, but by Data Flow. Back in the day, reaching for your flow charting template and developing the diagram of a well-balanced information system was the art and beauty part of the job. Only after you had the reviewed flowchart in hand would you even consider slogging through the drudgery of punching out the code. (yes... punch cards)
LabVIEW is a development system that allows the programmer to use flow charting tools to diagram the complete information system and press "run"..... LabVIEW "punches out the code" and compiles it for you. No need to fight through the syntax of text language A or language B.
With such a powerful tool, novices can build large, working programs rapidly -- implying some level of professional craftsmanship since it runs at all. However, if the system does not perform elegantly, or the source code diagram is a mess, it is not the fault of LabVIEW.
People often point to "LabVIEW is only good for developing large data acquisition systems." Perhaps those people should consider the professionalism of the scientists and engineers that are working in data acquisition. If they know enough to get the actual wires right for the sensors and transducers, it may be a good bet that they are expert at developing LabVIEW wiring diagrams as well.
I do use LabView at home, as it is part of Lego Mindstorms, which my son loves. And I really like the way to compose systems like this.
However, in my work (embedded systems), it is generally too restrictive. But also here, I'm trying to move up in abstraction:
- control and state behavior: Model based design (i.e. Rhapsody)
- data algorithms etc. Simulink
Sometimes a graphical model can require more clicks than a piece of code. But this also includes the work a good programmer needs to do in design & documentation; not just the code typing. The graphical notation takes many hassles away and is generally much faster if the tool is powerful enough for the complexity at hand. So I expect these kinds of tools will gain more popularity in the coming years as they mature and people get familiar with them.
I have used LabView for some 10 years. It's brilliant for scientific programming, i.e. like Matlab or Simulink but 10 times better. If you are having problems, then you are doing something wrong. It takes time to learn, like any language. As for using .NET instead: are these people even on the same planet? Why would you go to the trouble of writing everything from scratch when you can, say, pull up an FFT etc. and use already-written code? .NET is fine for simple programs but not so good for scientific processing. Yes, you can do it, but not without oodles of add-ons for graphics etc. Programming in G is far easier than text-based for scientific problems. You can of course program in C if you are interfacing, and use the DLL. Now there are things that I would not use LabView for; speech recognition, for example, may be a bit messy at present. More to the point though, why do people like programming in outdated text form when there is an easy alternative? It is as if people want to make things complicated so as to justify their job in some way. Simplify, simplify!
Somebody said that LabView is only used in the automation field. That's simply not right at all. It has applications in digital signal processing, control systems, communications, web-based work, mathematics, image processing and so on. It started as a data acquisition method, and they invented the name Virtual Instrumentation, but it has gone far beyond that now. It is a scientific programming language with a second-to-none graphical interface. It is way beyond Simulink, and if you like Matlab then it has a type of Matlab scripting built in for those who like such ways of programming. It is evolving all the time. The one thing I found difficult was writing code for the CompactRIO: tricky, but far easier than the alternative. It's expensive, but you get a quality product. I personally have not found any bugs in ordinary programming. It is an engineer's language, but anybody could use it to program.

Comparison of embedded operating systems?

I've been involved in embedded operating systems of one flavor or another, and have generally had to work with whatever the legacy system had. Now I have the chance to start from scratch on a new embedded project.
The primary constraints on the system are:
It needs a web-based interface.
Inputs are required to be processed in real-time (so a true RTOS is needed).
The memory available is 32MB of RAM and FLASH.
The operating systems that the team has used previously are VxWorks, ThreadX, uCos, pSOS, and Windows CE.
Does anyone have a comparison or trade study regarding operating system choice?
Are there any other operating systems that we should consider? (We've had eCos and RT-Linux suggested).
Edit - Thanks for all the responses to date. A pity I can't flag all as "accepted".
I think it would be wise to evaluate carefully what you mean by "RTOS". I have worked for years at a large company that builds high-performance embedded systems, and they refer to them as "real-time", although that's not what they really are. They are low-latency and have deterministic schedulers, and 9 times out of 10, that's what people are really after when they say RTOS.
True real-time requires hardware support and is likely not what you really mean. If all you want is low latency and deterministic scheduling (again, I think this is what people mean 90% of the time when they say "real-time"), then any Linux distribution would work just fine for you. You could probably even get by with Windows (I'm not sure how you control the Windows scheduler though...).
Again, just be careful what you mean by "Real-time".
It all depends on how much time your team has been allocated to learn a "new" RTOS.
Are there any reasons you don't want to use something that people already have experience with?
I have plenty of experience with vxWorks and I like it, but disregard my opinion as I work for WindRiver.
uC/OS-II has the advantage of being fully documented (as in, the source code is actually explained) in Labrosse's book. I don't know about web support though.
I know pSOS is no longer available.
You can also take a look at this list of RTOSes
I worked with QNX many years ago, and have nothing but great things to say about it. Even back then, QNX 4 (which is positively chunky compared to the Neutrino microkernel) was perfectly suited for low memory situations (though 32MB is oodles compared to the 1-2MB that we had to play with), and while I didn't explicitly play with any web-based stuff, I know Apache was available.
I purchased some development hardware from NetBurner.
It has been very easy to work with and very well documented. It is an RTOS running uClinux. The company is great to work with.
It might be a wise decision to select an OS that your team is experienced with. However I would like to promote two good open source options:
eCos (as you mentioned)
RTEMS
Both have a lot of features and drivers for a wide variety of architectures. You haven't mentioned what architecture you will be using. They provide POSIX layers which is nice if you want to stay as portable as possible.
Also the license for both eCos and RTEMS is GPL but with an exception so that the executable that is produced by linking against the kernel is not covered by GPL.
The communities are very active and there are companies which provide commercial support and development.
We've been very happy with the Keil RTX system....light and fast and meets all of our tight real time constraints. It also has some nice debugging features built in to monitor stack overflow, etc.
I have been pretty happy with Windows CE, although it is 'heavier'.
Posting to agree with Ben Collins -- you really need to determine whether you have a soft real-time requirement (primarily for human interaction) or a hard real-time requirement (for interfacing with timing-sensitive devices).
Soft can also mean that you can tolerate some hiccups every once in a while.
What are the reliability requirements? My experience with more general-purpose operating systems like Linux in embedded systems is that they tend to experience random hiccups due to their smart average-case optimizations that try to avoid starvation and the like for individual tasks.
VxWorks is good:
good documentation;
friendly developing tool;
low latency;
deterministic scheduling.
However, I suspect that Wind River will shift its main attention to Linux, and that Wind River Linux will break into the market currently held by VxWorks.
A smaller market means less demand for engineers.
Here is the latest study. The last one was done more than 8 years ago, so this is the most relevant. The tables can be used to add additional RTOS choices. You'll note that this comparison is focused on lighter machines, but it is equally applicable to heavier machines provided virtual memory is not required.
http://www.embedded.com/design/operating-systems/4425751/Comparing-microcontroller-real-time-operating-systems