I don't fully understand the complete development cycle and the transition from general-purpose boards to serious microcontroller-based industrial hardware.
Right now I use an RPi or similar general-purpose boards and follow this development process:
design hardware with SoC (RPi) in mind.
order/buy hardware
connect main board and peripherals
install OS (almost always Linux)
install libraries, applications, toolchain
create corresponding software with a previously installed toolchain
when the solution is working correctly, move hardware to an appropriate case.
deploy
It may include additional steps, but the way I see it, everything is already designed, assembled and tested before I even start my development. I only need to choose devices, connect wires and write the software. Software is mostly free.
The downside is that such a solution lacks quality. I doubt the hardware is able to withstand a harsh industrial environment. It is also not small enough.
Now I am trying to dive into the STM32/Quark/[any microcontroller] world. What I have understood so far is:
buy a development board
create software
test
What confuses me is the part where you switch from the dev board to... what?
I mean, dev boards are not designed to be used in a final product, are they?
I guess I need a custom solution.
Do I need to design a custom electronic circuit, have it produced by an external manufacturer, and mount my microcontroller and additional ICs on it?
I see various presentations of modern small-size CPUs and I want to know how to develop a device with them.
I want to get an understanding of the full development cycle of a low-power IoT device, but I don't know how to ask this correctly.
This isn't really an answer; I don't have enough reputation to simply add a comment, unfortunately. The fact is, answering your question is not simple: there is a lot to it. Even after four years of college in Electronic Engineering Technology, it was hardly a scratch on what the "real world" is. One learns so much more in the workplace, and it never stops.
Anyway, a couple of comments.
Microcontrollers are not all equal, thus they are not all equally suitable for every task. The development boards and evaluation boards that are created for microcontrollers are also not all equal and may focus on applicability to a certain market segment, e.g. medical, automotive, consumer IoT, etc.
Long before you start buying a development or evaluation board, you have to decide on the most appropriate microcontroller. And even: is a microcontroller actually the best choice? An ASIC or FPGA? What kind of support chips are needed? How will they interface? Many chip manufacturers provide reference designs that can be used as a starting point, but there are always multiple iterations to actually develop a product. And there is testing, so much testing that we have "test engineers."
Your list of development steps is lacking greatly. First and foremost, the actual specifications have to be determined for whatever product is being developed, and from these specifications appropriate hardware is selected for evaluation. Cost is always a driving factor, so fitting the right device to the product and not going overkill is very important. A lot of time is spent evaluating candidate parts from their datasheets to determine which ones seem to be the right fit. Then there are all the other factors, such as experience with the device/brand/IDE etc. All of that adds to the cost of development, plus much more.
You mention software (firmware) is free. No, software and firmware are never free. Someone has to develop it, and that takes time, and time is money. Someone has to debug it. Debugging takes time. Debugging hardware is expensive. Don't forget the cost of the IDE; commercial IDEs are not cheap, and some are much more expensive than others and can greatly affect the cost of development. Compare the cost of buying an IDE to develop for a Maxim Integrated MAXQ MCU to any of the multitude of AVR or ARM IDE choices. Last I checked there were only two companies making an IDE for the MAXQ MCUs. What resources are available to assist in your design that you can use with minimal or no licensing fees? This is the tip of the iceberg. There is a lot to it; software/firmware is not "free."
So fast forward a year, you finished a design and it seems to pass all internal testing. What countries are you marketing in? Do you need UL, CE or other certifications? I hope you designed your board to take into account EMI mitigation. Testing that in-house isn't cheap, certification testing isn't either, and failing is even more costly.
These are a very, very few of the things that seem to be often ignored by hobbyists and makers thinking they can come up with the next best thing and make a killing in some emerging market.
I suggest you do a search on Amazon for "engineering development process", "lean manufacturing", "design for manufacturability", "design for internet of things", "engineering economics" and plan on spending some money to buy some books and time to read up on what the design process is from the various points of view that have to be considered.
Now maybe you mean to develop and deploy for your own use, and cost, manufacturability, marketability and the rest are not so important to you. I still suggest you do some Amazon research and pick up some well-recommended reading/learning material there that is pertinent to your actual goals. You may want to avoid textbooks, as they are generally more useful when accompanied by class lectures, plus they tend to cost much more than books written for the non-student.
Don't exclude the option of hiring out the design and development of an idea to a firm that specializes in it. It is expensive, but is it more expensive than a one-off in-house design and development effort? Probably not. How fast do you actually need your device? Will you lose out if someone beats you to market? There are so many things to consider that I could spend hours just pointing out things that may, or may not, even be relevant to you, depending on what your actual goal is.
TL;DR There is a great deal to the design and development of a product, be it marketed to consumers (such as IoT) or to industry. Specifications come first. The exact development process is going to be influenced by the specifications. Your question cannot be easily answered, and certainly not without knowing much more about your end goal. Amazon is a good source of books for really general questions like this.
Is it possible to simulate multi-agent systems in Modelica? I'm talking about a system such as MASON, written in Java. How easy or difficult would it be?
As I understand it, Modelica is not a typical programming language, so would it be particularly helpful, or would the basic design of the Modelica language be a hindrance? And more importantly, how would we model the "messaging" systems that are common in agent-based modeling?
Modelica can simulate discrete event systems. Some libraries exist: ModelicaDEVS, ARENALib etc.
Maybe the syntax is not perfect yet for this kind of messaging, but perhaps the language will be improved further in this direction.
An advantage might be that real-time-capable code can be generated, so the agents could run in embedded systems even under hard real-time constraints; only some of the other tools, such as Ptolemy II, support this.
P.S. (added; see the first comment):
From the start, Modelica was designed to generate code capable of running in real time. So you could take the unchanged Modelica model of your agent, connect its I/O to sensors and actuators, and download it to real-time hardware (e.g. PowerPC). Your swarm of agents will then exhibit exactly the timing behaviour you modeled, and exist for real. You could also have only one real agent in hardware (maybe this hardware is expensive) and simulate its interaction with all the other agents in real time on real-time simulator hardware, again using your unchanged models.
This is one of the major reasons why Modelica's semantics are not as dynamic as, say, Java's. If you want to run your MASON agent on real hardware you are in trouble: you have to move to something like Safety-Critical Java, which means that many constructs of your code, and also of the standard Java libraries, must be rewritten or are not allowed at all. Without this, you will have to live with the possibility that your agent misses its deadline and burns down the house...
As the main project in the 5th semester of my CS degree, I am researching technologies for realizing real-time server-to-client communication in a multi-user environment. The deciding factors are:
1. Performance
2. Scalability
3. Ease of implementation
4. Portability
5. Architectural flexibility
6. Community support
7. Licensing fees
Now, I could build a chat application with each technology that I analyze and get it over with. The problem is that I don't think such an app would even remotely reach the boundaries of what a given technology can do.
So my question is: what kind of prototype application could I build to make a good test for Performance and Scalability?
If it's any help, the technologies which I am going to test are: SignalR, Pusher, Pubnub, LightStreamer.
Thank you in advance!
Not a "popular" answer, although:
My experience shows me that each and every case is special.
There is no prototype application for that, apart from generic tools like ab (generic to some degree, anyway).
For each test you simply have to get the right "ingredients".
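If it helps to make "ingredients" concrete: the core measurement in almost any such test is round-trip latency under a growing number of concurrent clients. Below is a minimal, generic sketch of that idea in C against a plain TCP echo endpoint. It is deliberately not tied to SignalR, Pusher, Pubnub or LightStreamer (each of those needs its own client library; only the measurement idea carries over), and the host, port, client count and message count are made-up placeholder values.

/*
 * latency.c -- a minimal, generic round-trip latency harness (plain TCP).
 * Not tied to any particular push service; only the measurement idea
 * carries over.  Assumes an echo server is already listening on HOST:PORT
 * (both placeholders).  Build: cc -O2 -pthread latency.c
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

#define HOST "127.0.0.1"      /* placeholder echo-server address */
#define PORT 9000             /* placeholder port                */
#define CLIENTS 50            /* concurrent connections          */
#define MSGS_PER_CLIENT 200   /* round trips per connection      */

static double now_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
}

/* Each client thread sends small messages and waits for the echo,
 * accumulating the mean round-trip time into *arg. */
static void *client(void *arg) {
    double *avg = arg;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(PORT);
    inet_pton(AF_INET, HOST, &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) != 0) {
        perror("connect");
        close(fd);
        *avg = -1.0;
        return NULL;
    }
    char buf[64];
    double total = 0.0;
    for (int i = 0; i < MSGS_PER_CLIENT; i++) {
        double t0 = now_ms();
        int n = snprintf(buf, sizeof buf, "ping %d", i);
        send(fd, buf, (size_t)n, 0);
        recv(fd, buf, sizeof buf, 0);   /* short reads ignored for brevity */
        total += now_ms() - t0;
    }
    close(fd);
    *avg = total / MSGS_PER_CLIENT;
    return NULL;
}

int main(void) {
    pthread_t tid[CLIENTS];
    double avg[CLIENTS];
    for (int i = 0; i < CLIENTS; i++)
        pthread_create(&tid[i], NULL, client, &avg[i]);
    for (int i = 0; i < CLIENTS; i++)
        pthread_join(tid[i], NULL);
    for (int i = 0; i < CLIENTS; i++)
        printf("client %2d: mean round trip %.2f ms\n", i, avg[i]);
    return 0;
}

Scaling the CLIENTS value (or running the harness from several machines) while watching how the latency distribution degrades is usually far more revealing than a feature-complete chat application.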
This is marked as a subjective question; I hope I won't get too many downvotes, though.
LV seems to offer a nice graphical alternative to traditional text-based programming. As I understand it, it's not just a virtualization/data-acquisition programming language. Nonetheless, it seems to have that paradigm pegged to its creator's name.
My question comes up because it doesn't seem to be widely used for multi-purpose applications. I'm not an LV expert of any kind; I'm more of a learner, and I'm still getting used to LV.
Labview is fantastic if you have National Instruments hardware, and want to do something like acquire, plot and log the data.
When you start interfacing with custom devices, the wiring between modules gets complicated because of all the string-manipulation work needed for input and output to a device.
At my place of work, we found that we got annoyed with having to make massive, complicated VIs to interface with devices, and started writing them in .NET and interfacing them to Labview.
In the end we ended up scrapping Labview altogether and using NI Measurement Studio for Visual Studio to give us all the lovely-looking NI controls (waveform plot, tank, gauges, switches etc.) with the flexibility of C#.
In summary, even with a couple of 24" screens, sometimes the wiring for Labview code can get too complex and becomes impossible to comment, debug, and make extensible for any future changes. I suggest taking a look at Measurement Studio for Visual Studio and using your favourite .NET language with the pretty NI controls.
My two experiences with "graphic alternative[s] to traditional text based programming" have been dreadful. I find such languages to be slow to use, hard to edit, and inexpressive. Debugging them is a nightmare. And they offer no real advantages.
To be sure, it has been quite a long time since I looked at one, but the opinions of others I've asked about them have been only lukewarm, so I have never taken the time to look again. Reasons to look again are welcome and will be taken on board...
Labview can be used to author large, complex software projects. Labview is unquestionably much more fun to use than a syntax-based language. I have programmed mathematically dense, dynamic simulations using Labview. Newer versions of Labview include a lot of exciting features, especially for utilizing multiple processors. I like Labview very much. But I don't recommend it to anyone.
Unfortunately, it's an absolute nightmare for anything other than simple acquisition and display. It may one day be sufficiently developed to be considered as a viable alternative to text based languages. However, the developers at NI have consistently opted to ignore the three fundamental problems that plague labview.
1) It is unstable and riddled with bugs. There are thousands of bugs that have been posted to the labview support forums that are yet to be fixed. Some of these are quite serious, such as memory leaks, or mathematical errors in basic functions.
2) The documentation is atrocious. More often than not, when you look for help with a Labview function in the local help file, you'll find a sentence that merely restates the name of the item you are trying to find some detail on. E.g. a user looks up the help file on the texture filter mode setting and the only thing written in the help file is "Texture Filter Mode: selects the mode used for texture filtering." Gee, thanks. That clears things right up, doesn't it? The problem goes much deeper: quite often, when you ask a technical representative from National Instruments to provide critical details about Labview functionality or the specific behavior of mathematical functions, they simply don't know how the functions in their own library work. This may sound like an exaggeration, but trust me, it's not.
3) While it's not impossible to keep graphical code clean and well documented, Labview is designed to make these tasks both difficult and inefficient. In order to keep your code from becoming a tangled, confusing mess, you must routinely (every few operations) employ structures like clusters, and sub-vis and giant type defined controls (which can stretch over multiple screens in a large project). These structures eat memory and destroy performance by forcing labview to make multiple copies of data in memory and perform gratuitous operations- all for the sake of keeping the graphical diagram from looking like rainbow colored spaghetti with no comments or text anywhere in sight. Programming in labview is like playing pictionary with the devil. Imagine your giant software project written as a wall sized flowchart with no words on it at all. Now imagine that all the lines cross each other a thousand times so that tracing the data flow is completely impossible. You have just envisioned the most natural and most efficient way to program in labview.
Labview is cool. Labview is getting better with each new release. If National Instruments keeps improving it, it will be great one day as a general programming language. Right now, it's an extremely bad choice as a software development platform for large or logically complex projects.
I have been writing in LabVIEW for almost 20 years now. I develop automated test systems. I have developed RF, vision, high-speed digital and many different flavors of mixed-signal test systems. I was a "C" programmer before I switched to LabVIEW.
It's true that you can build some programs quickly in LabVIEW, but just like any other language, it takes a lot of training to learn to build a large application that is clean, easy to maintain, and built from reusable code. In 20 years I have never had a LabVIEW bug stop me from finishing a project.
Back in the day, NIWEEK would have a software shootout every year. LabVIEW and LabWINDOWS (NI's version of "C") programmers would both be given the same problem and have a race to see which group finished first. Each and every year all the LabVIEW programmers were done way before the 1st LabWINDOWs person finished. I have challenged many of my dedicated text based programming friends to shootouts and they all admit they don't stand a chance, even if I let them define the software problem.
So, I feel LabVIEW is a great programming tool. It's definitely the way to go if you're interfacing with any type of NI hardware. It's not the answer for everything, but I'm sure there are many people not using it just because they don't consider LabVIEW a "real programming language". After all, we just wire a bunch of blocks together, right? I do find it funny how many text-based programmers snub their noses at it, as they are so proud of the mess of text code they have created that only they can understand. A good programmer in any language should write code that others can easily read. Writing overly complex code that is impossible to follow does not make the programmer a genius. It means the programmer is a "complicator" (someone who can take a simple problem and complicate it). I believe in the KISS principle (keep it simple, stupid).
Anyway, there's my two cents' worth!
I thought LabVIEW was a dream for FPGA programming. Independent executable blocks just... work. In general, I use LabVIEW for various tasks interfacing with my DAQ and FPGA hardware, but that's about it. It seems (again to me) that this is LabVIEW's strong point and the reason it was built, but outside that arena it feels "cumbersome." As far as getting things done, it's like any other language with a learning curve - once you figure it out it's not too bad for getting work done. I've seen several people give up before that thinking the learning curve was permanent or something.
Picking up a 30" monitor made a huge difference.
I know one thing that people dislike is the version control integration.
Edit: LabVIEW/hardware is hella expensive for "just for fun" use. I dropped $10K on their hardware (student prices) and got the software for free from school for making toys around the house.
Our company has been using LabVIEW for the last 10 years for measuring, monitoring and reporting on our subject (trains).
Recently we have started using LabVIEW as a GUI for databases with lots of data; the power of LabVIEW combined with its recent new features (classes, XControls) allows us to create these kinds of GUIs for a fraction of the development cost on other platforms, and without needing external programmers at consultancy rates.
Ton
I first started using Labview in a college physics lab. Initially, I thought it was slow and cumbersome when compared to other text-based languages. It was too difficult to create complex logic and code became sloppy real fast (wires everywhere).
Then, a few years later, I learned about using sub-VIs and bundles. What a difference! At this point, I was using Labview for very high-level functions. I was taking raw input from a camera, using all kinds of image filters and processing to ultimately parse out the lines in a road so that a vehicle could drive itself down this road with no driver; it was for the DARPA URBAN CHALLENGE. I was also generating maps from text waypoint data, making high-level parsing functions, and a slew of other applications that had nothing to do with processing data from input devices. It was really a lot of fun, and fast.
After leaving college, I am now back to using text-based languages. I've been using PHP, Javascript, VBA, C#, VBscript, VB.net, Matlab, Epson RC+, Codeigniter, various APIs, and I'm sure some others. I often get very frustrated by the amount of syntax I have to memorize in order to program with any significant speed. I find it annoying to have to switch schools of thought based on the language I am using... when all programming languages essentially do the same thing! I need a second monitor just to have the help up at all times so I can find the syntax for the same functions in different languages. I miss Labview very much; it's too bad it's so expensive, otherwise I would use it for everything.
Graphical programming, I think, has huge potential. By not being constrained by syntax, you can focus on logic instead of code. Labview itself may still be in its infancy in terms of support and debugging, but I believe conceptually it beats out the competition. It's simply a more intuitive way to program.
We use LabVIEW for running our end-of-line test equipment, and it is ideal for data acquisition and control. Typically measuring 15 to 80 differential voltages and controlling environmental chambers, mass-flow controllers and various serial devices, LabVIEW is more than capable.
Interfacing with custom devices can be simplified greatly by using the NI instrument driver wizard to create reusable VIs, interfacing with custom DLLs if needed. On a number of projects we have created such drivers for custom hardware, and once created they are reusable in future projects with no modification.
Using event-driven structures, user interfaces are responsive, and we regularly use LabVIEW applications to interface with a database.
Whatever programming environment you choose it's the process of designing the application that matters most. I agree that you can create some really horrible and unreadable block diagrams in LabVIEW but then you can also create unreadable code in Visual studio. With just a little thought and planning a LabVIEW block diagram can be made to fit on a single 24" monitor with plenty of space to add comments.
I would use LabVIEW over Visual Studio for most projects.
But people do use LabView for purposes other than data acquisition and virtualization. Of course LabVIEW is mainly used in labs and production environments, because that is (or was) one of NI's main customer targets.
However, you can do lots of different things with LabVIEW, like programming a robot that performs a lot of image analysis and then tweets the results. Have a look at videos from NI Week 2009 on YouTube and you'll see how powerful this tool is. For instance, it is possible to write code and deploy it to ARM MCUs (see this Dev Monkey article from 2009-08-10).
And finally, check out this LabVIEW DIY group.
I have been using LabVIEW for about two years for developing automation. Given due care and proper design, we can certainly develop maintainable and really good-looking applications in LabVIEW. I think this is the same for all the other languages out there. I have seen equally bad code in LabVIEW, primarily from people who use it only to develop quick-and-dirty working automation. IMHO, graphical programming is a lot easier to code and understand if done right. That said, I feel text-based programming 'feels' more powerful!
LabVIEW is primarily marketed for industrial automation, has inherent support for a lot of NI hardware, and you can get third-party hardware working with it pretty quickly. I think that is the reason you see it only in the automation field. Moreover, it is pretty costly, and you are locked in with NI, as you cannot even open your code if you do not buy the software from them!
I've been thinking about this question for decades (yes, since 1989...)
Like all programming languages, LabVIEW is a high-level tool used to manipulate the flow of electrons. Unless you are a purist and refuse to use anything other than a breadboard and wires; transistors, integrated circuits and programming languages are probably a good thing if you wish to build something of any consequence.
But like all high-level tools, just wielding one does not make you a professional craftsman. Back in the day of soldering irons, op-amps and UARTs it required a large amount of careful study before you could create a system that actually functioned. The modern realm of text-based languages is so overly dominated by syntax that the programmer must get it just right before it will compile and run. In order to write code that works, the programmer must increase their skill level to create systems much larger than "Hello World".
LabVIEW is not dominated by syntax, but by Data Flow. Back in the day, reaching for your flow charting template and developing the diagram of a well-balanced information system was the art and beauty part of the job. Only after you had the reviewed flowchart in hand would you even consider slogging through the drudgery of punching out the code. (yes... punch cards)
LabVIEW is a development system that allows the programmer to use flow charting tools to diagram the complete information system and press "run"..... LabVIEW "punches out the code" and compiles it for you. No need to fight through the syntax of text language A or language B.
With such a powerful tool, novices can build large, working programs rapidly -- implying some level of professional craftsmanship since it runs at all. However, if the system does not perform elegantly, or the source code diagram is a mess, it is not the fault of LabVIEW.
People often point to "LabVIEW is only good for developing large data acquisition systems." Perhaps those people should consider the professionalism of the scientists and engineers that are working in data acquisition. If they know enough to get the actual wires right for the sensors and transducers, it may be a good bet that they are expert at developing LabVIEW wiring diagrams as well.
I do use LabView at home, as it is part of Lego Mindstorms, which my son loves. And I really like this way of composing systems.
However, in my work (embedded systems), it is generally too restrictive. But here too, I'm trying to move up in abstraction:
- control and state behavior: model-based design (e.g. Rhapsody)
- data algorithms etc.: Simulink
Sometimes a graphical model can require more clicks than a piece of code. But this also includes the work a good programmer needs to do in design and documentation, not just the code typing. The graphical notation takes many hassles away and is generally much faster if the tool is powerful enough for the complexity at hand. So I expect these kinds of tools to gain more popularity in the coming years as they mature and people get familiar with them.
I have used LabView for some 10 years. It's brilliant for scientific programming, i.e. like Matlab or Simulink, but 10 times better. If you are having problems, then you are doing something wrong. It takes time to learn, like any language. As for using .NET instead: are these people even on the same planet? Why would you go to the trouble of writing everything from scratch when you can, say, pull up an FFT etc. and use already-written code? .NET is fine for simple programs, but not so good for scientific processing. Yes, you can do it, but not without oodles of add-ons for graphics etc. Programming in G is far easier than text-based programming for scientific problems. You can of course program in C if you are interfacing, and use the DLL. Now, there are things I would not use LabView for; speech recognition, for example, may be a bit messy at present. More to the point though, why do people like programming in outdated text form when there is an easy alternative? It is as if people want to make things complicated so as to justify their job in some way. Simplify, simplify!
Somebody said that LabView is only used in the automation field. That is simply not right at all. It has applications in digital signal processing, control systems, communications, web-based applications, mathematics, image processing and so on. It started as a data acquisition method, and they invented the name Virtual Instrumentation, but it has gone far beyond that now. It is a scientific programming language with a second-to-none graphical interface. It is way beyond Simulink, and if you like Matlab then it has a type of Matlab scripting built in for those who like such ways of programming. It is evolving all the time. The one thing I found difficult was writing code for the CompactRIO: tricky, but far easier than the alternative. It's expensive, but you get a quality product. I personally have not found any bugs in ordinary programming. It is an engineer's language, but anybody could use it to program.
I'm taking an introductory course (3 months) on real-time systems design, but it doesn't involve any implementation.
I would like to build something that lets me understand better what I'll learn in theory, but since I have never built any real-time system, I can't estimate how long any project would take. It would be a proof-of-concept project, or something like that, given my available time and knowledge.
Please, could you give me some idea? Thank you in advance.
I program in TSQL, Delphi and C#, but I won't have any problem learning another language.
I suggest you consider exploring the Real-Time Specification for Java (RTSJ). While it is not a traditional environment for constructing real-time software, it is an up-and-coming technology with a lot of interest. Even better, you can witness some of the ongoing debate about what matters and what doesn't in real-time systems.
Sun's JavaRTS is freely available for download, and has some interesting demonstrations available to show deterministic behavior, and show off their RT garbage collector.
In terms of a specific project, I suggest you start simple: 1) build a work generator that you can tune to consume a given amount of CPU time; 2) put this into a framework that can produce a distribution of work-generator tasks (as threads, or as chunks of work executed in a thread) and a mechanism for logging the work produced; 3) produce charts of the execution time, sojourn time, deadline, and slack/overrun of these tasks versus their priority; 4) demonstrate that tasks running in the context of real-time threads (as opposed to timesharing) behave differently. (A rough sketch of step 1 follows below.)
Bonus points if you can measure the overhead in the scheduler by determining at what supplied load (total CPU time produced by your work generator tasks divided by wall-clock time) your tasks begin missing deadlines.
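To make step 1 more concrete, here is a minimal sketch of a tunable work generator with deadline logging, written in plain C with POSIX threads rather than Java RTS; the same structure maps onto an RTSJ RealtimeThread once you switch. The task count, period, budget and job count are arbitrary example values, not recommendations.

/*
 * workgen.c -- sketch of a tunable work generator with deadline logging,
 * using POSIX threads (not RTSJ).  Each task tries to burn BUDGET_MS of
 * CPU every PERIOD_MS and records whether it met its deadline.  All the
 * numbers are arbitrary examples.  Build: cc -O2 -pthread workgen.c
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define TASKS     4      /* number of concurrent tasks          */
#define PERIOD_MS 100    /* release period of each task         */
#define BUDGET_MS 30     /* CPU time each job tries to consume  */
#define JOBS      50     /* jobs per task                       */

static double now_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
}

/* Busy-loop for roughly `ms` milliseconds of wall time; on an otherwise
 * idle core this approximates consuming `ms` of CPU time. */
static void burn(double ms) {
    double start = now_ms();
    while (now_ms() - start < ms)
        ;   /* spin */
}

static void *task(void *arg) {
    long id = (long)arg;
    int misses = 0;
    double next_release = now_ms();
    for (int j = 0; j < JOBS; j++) {
        next_release += PERIOD_MS;      /* deadline == next release */
        burn(BUDGET_MS);
        if (now_ms() > next_release)
            misses++;
        double slack = next_release - now_ms();
        if (slack > 0) {                /* sleep away any remaining slack */
            struct timespec ts;
            ts.tv_sec  = (time_t)(slack / 1000.0);
            ts.tv_nsec = (long)((slack - ts.tv_sec * 1000.0) * 1e6);
            nanosleep(&ts, NULL);
        }
    }
    printf("task %ld: missed %d of %d deadlines\n", id, misses, JOBS);
    return NULL;
}

int main(void) {
    pthread_t tid[TASKS];
    for (long i = 0; i < TASKS; i++)
        pthread_create(&tid[i], NULL, task, (void *)i);
    for (int i = 0; i < TASKS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}

Raising BUDGET_MS or TASKS until deadlines start being missed gives you the supplied-load experiment described above (total CPU time generated divided by wall-clock time), and swapping the plain threads for priority-scheduled or real-time threads lets you compare the two behaviours.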
Try to think of real-time tasks that are time-critical, for instance video playback, which fails if tasks (e.g. calculating the next frame) are not finished in time.
You can also think of some industrial solutions, but they are probably more difficult to study in your local environment.
You should definitely consider building your system using a hardware development board equipped with a small processor (ARM, PIC, AVR, any one will do). This really helped remove my fear of the low-level when I started developing. You'll have to use C or C++ though.
You will then have two alternatives: either go bare-metal, or use a real-time OS.
Going bare-metal, you can learn :
How to initialize your processor from scratch and, most importantly, how to use interrupts, which are the fastest way you have to respond to an external event (a minimal sketch follows this list)
How to implement lightweight threads with fast context switching, something every real-time OS implements
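As a heavily hedged illustration of the interrupt point above, this is roughly what a minimal 1 ms timer interrupt looks like on an ARM Cortex-M part using the vendor-supplied CMSIS headers. The device header name and the LED comment are placeholders that depend entirely on the chip and board you choose, so treat it as a shape, not a drop-in example.

/*
 * Bare-metal sketch: a 1 ms periodic interrupt on an ARM Cortex-M MCU via
 * the CMSIS SysTick timer.  "stm32f4xx.h" and the LED comment are
 * placeholders -- substitute your own device header and GPIO access.
 */
#include "stm32f4xx.h"              /* placeholder device/CMSIS header */

static volatile uint32_t ms_ticks;  /* shared with the ISR, hence volatile */

/* Interrupt service routine.  The name SysTick_Handler is fixed by the
 * CMSIS startup code, which places it in the vector table for us. */
void SysTick_Handler(void)
{
    ms_ticks++;
    if (ms_ticks % 500 == 0) {
        /* e.g. toggle an LED here -- the GPIO registers are chip-specific */
    }
}

int main(void)
{
    /* Fire SysTick every 1 ms, assuming SystemCoreClock was set up by the
     * startup code (SystemInit). */
    SysTick_Config(SystemCoreClock / 1000u);

    for (;;)
        __WFI();                    /* sleep until the next interrupt */
}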
In order to ease this a bit, look for a dev kit which comes with lots of documentation and source code. I used Embedded Artists ARM boards and they give you a lot of material.
Going with the RT OS :
You'll fast-track your project, and will be able to learn how to fine-tune a RT OS
You may try your hand at an open-source OS, such as Linux or the BSDs, and learn a lot from the source code
Either choice is good, you will get a really cool hands-on project to show off and hopefully better understand your course material. Good luck!
As most real-time systems are still implemented in C or C++, it may be good to brush up on your knowledge of these programming languages. Many real-time systems are also embedded systems, so you might want to play around with a cheap open-source board like the BeagleBoard (http://beagleboard.org/). This will also give you a chance to learn about cross-compiling, etc.
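For instance (with the caveat that the exact compiler prefix depends on which cross toolchain you install), the classic first cross-compilation exercise is just a hello-world built on your PC and run on the board:

/* hello.c -- the classic smoke test for a cross toolchain.  On a typical
 * Linux host with an ARM hard-float toolchain installed, something like
 *     arm-linux-gnueabihf-gcc -o hello hello.c
 * builds a binary you copy to the board and run there; the exact compiler
 * prefix varies with the toolchain you install. */
#include <stdio.h>

int main(void)
{
    printf("hello from the target board\n");
    return 0;
}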