How to create and deploy microcontroller-based industrial solutions? - raspberry-pi

I don't fully understand the complete development cycle, or the transition from general-purpose boards to serious microcontroller-based industrial hardware.
Right now I use RPi or similar general purpose boards and follow this development process:
design hardware with SoC (RPi) in mind.
order/buy hardware
connect main board and peripherals
install OS (almost always Linux)
install libraries, applications, toolchain
create corresponding software with a previously installed toolchain
when the solution is working correctly, move hardware to an appropriate case.
deploy
It may include additional steps, but the way I see it, everything is already designed, assembled, and tested before I even start my development. I only need to choose devices, connect wires, and write the software. The software side is mostly free.
The downside is that such a solution lacks quality. I doubt the hardware could withstand a harsh industrial environment, and it is also not small enough.
Now I am trying to dive into the STM32/Quark/[any microcontroller] world. What I understand so far is:
buy a development board
create software (see the sketch after this list)
test
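For concreteness, here is roughly what the "create software" step looks like on a bare-metal MCU: no OS, no filesystem, just your code talking to memory-mapped peripherals. A minimal sketch only; the register addresses and bit positions below are made up for illustration, not taken from any real part's datasheet.
```c
#include <stdint.h>

/* Hypothetical memory-mapped GPIO registers -- placeholder addresses,
 * not from any real datasheet. */
#define GPIO_MODE_REG  (*(volatile uint32_t *)0x40020000u)
#define GPIO_OUT_REG   (*(volatile uint32_t *)0x40020014u)
#define LED_PIN        (1u << 5)

static void delay(volatile uint32_t count)
{
    while (count--) { /* crude busy-wait; a real design would use a hardware timer */ }
}

int main(void)
{
    GPIO_MODE_REG |= (1u << 10);  /* configure pin 5 as an output (made-up encoding) */

    for (;;) {                    /* no OS: this loop IS the whole program */
        GPIO_OUT_REG ^= LED_PIN;  /* toggle the LED */
        delay(500000);
    }
}
```
You cross-compile this (e.g. with arm-none-eabi-gcc) and flash it through the dev board's debugger; the point is that the identical binary later goes onto the custom board.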
What confuses me is the part where you switch from the dev board to... what?
I mean, dev boards are not designed to be used in a final product, are they?
I guess I need a custom solution.
Do I need to design a custom electronic circuit, have it produced by an external manufacturer, and install my microcontroller and additional ICs there?
I see various presentations of modern small-size CPUs and I want to know how to develop a device with them.
I want to get an understanding of the full development cycle of a low-power IoT device, but I don't know how to ask correctly.

This isn't really an answer; I don't have enough reputation to simply add a comment, unfortunately. The fact is, answering your question is not simple; there is a lot to it. Even after four years of college in Electronic Engineering Technology, it was hardly a scratch on what the "real world" is. One learns so much more in the workplace, and it never stops.
Anyway, a couple of comments.
Microcontrollers are not all equal, and thus they are not all equally suitable for every task. The development and evaluation boards created for microcontrollers are likewise not all equal and may focus on applicability to a certain market segment, e.g. medical, automotive, consumer IoT, etc.
Long before you start buying a development or evaluation board, you have to decide on the most appropriate microcontroller. And even: is a microcontroller actually the best choice? ASIC or FPGA? What kind of support chips are needed? How will they interface? Many chip manufacturers provide reference designs that can be used as a starting point, but there are always multiple iterations to actually develop a product. And there is testing, so much testing that we have "test engineers."
Your list of development steps is lacking greatly. First and foremost, the actual specifications have to be determined for whatever product is being developed, and from these specifications appropriate hardware is selected for evaluation. Cost is always a driving factor, so fitting the right device to the product and not going overkill is very important. A lot of time is spent evaluating candidate parts from their datasheets to determine which seem to be the right fit. Then there are all the other factors, such as experience with the device/brand/IDE, etc. All of that adds to the cost of development, plus much more.
You mention software (firmware) is free. No, software and firmware are never free. Someone has to develop it, and that takes time, and time is money. Someone has to debug it. Debugging takes time. Debugging hardware is expensive. Don't forget the cost of the IDE: commercial IDEs are not cheap, and some are much more expensive than others and can greatly affect the cost of development. Compare the cost of buying an IDE to develop for a Maxim Integrated MAXQ MCU to any of the multitude of AVR or ARM IDE choices. Last I checked, there were only two companies making an IDE for the MAXQ MCUs. What resources are available to assist in your design that you can use with minimal or no licensing fees? This is the tip of the iceberg. There is a lot to it; software/firmware is not "free."
So fast forward a year, you finished a design and it seems to pass all internal testing. What countries are you marketing in? Do you need UL, CE or other certifications? I hope you designed your board to take into account EMI mitigation. Testing that in-house isn't cheap, certification testing isn't either, and failing is even more costly.
These are a very, very few things that seem to be often ignored by hobbyists and makers thinking they can come up with the next best thing and make a killing in some emerging market.
I suggest you do a search on Amazon for "engineering development process", "lean manufacturing", "design for manufacturability", "design for internet of things", "engineering economics" and plan on spending some money to buy some books and time to read up on what the design process is from the various points of view that have to be considered.
Now, maybe you mean to develop and deploy for your own use, and cost, manufacturability, marketability and the rest are not so important to you. I still suggest you do some Amazon research and pick up some well-recommended reading/learning material that is pertinent to your actual goals. You may want to avoid textbooks, as they are generally more useful when accompanied by class lectures, plus they tend to cost much more than books written for the non-student.
Don't exclude the option of hiring out the design and development of an idea to a firm that specializes in it. It is expensive, but is it more expensive than one-off in-house design and development? Probably not. How fast do you actually need your device? Will you lose out if someone beats you to market? There are so many things to consider that I could spend hours just pointing out things that may, or may not, even be relevant to you, depending on what your actual goal is.
TL;DR There is a great deal to the design and development of a product, be it marketed to consumers (such as IoT) or to industry. Specifications come first. The exact development process is going to be influenced by the specifications. Your question cannot be easily answered, and certainly not without knowing much more about your end goal. Amazon is a good source of books for really general questions like this.

Related

What application can I build to stress-test real-time communication in a multi-user environment?

As the main project in the 5th semester of my CS degree, I am doing research on technologies for realizing real-time server-to-client communication in a multi-user environment. The deciding factors are:
1. Performance
2. Scalability
3. Ease of implementation
4. Portability
5. Architectural flexibility
6. Community support
7. Licensing fees
Now, I could build a chat application with each technology I analyze and get it over with. The problem is that I don't think such an app would even remotely reach the boundaries of what a given technology can do.
So my question is: what kind of prototype application could I build to make a good test for Performance and Scalability?
If it's any help, the technologies which I am going to test are: SignalR, Pusher, Pubnub, LightStreamer.
Thank you in advance!
Not a "popular" answer, although:
My experience shows me that each and every case is special.
There is not prototype application for that, except for generic tools like ab (generic to some degree, uh).
For each test you simply have to get the right "ingredients".
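To give a flavor of what one such "ingredient" might look like: below is a minimal sketch of a generic round-trip latency probe. It assumes a plain TCP echo endpoint (host and port are placeholders); a real test against SignalR, Pusher, Pubnub or Lightstreamer would speak that product's own protocol and run many of these clients concurrently.
```c
/* Minimal round-trip latency probe against a plain TCP echo server.
 * HOST/PORT are placeholders -- purely illustrative. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define HOST   "127.0.0.1"
#define PORT   7
#define ROUNDS 100

int main(void)
{
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(PORT);
    inet_pton(AF_INET, HOST, &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");
        return 1;
    }

    char buf[64];
    double total_ms = 0.0;
    int done = 0;
    for (int i = 0; i < ROUNDS; i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (write(fd, "ping\n", 5) != 5 || read(fd, buf, sizeof buf) <= 0)
            break;                          /* connection dropped */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        total_ms += (t1.tv_sec - t0.tv_sec) * 1e3
                  + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        done++;
    }
    if (done)
        printf("avg round-trip: %.3f ms over %d rounds\n", total_ms / done, done);
    close(fd);
    return 0;
}
```
Scalability is then a matter of launching hundreds or thousands of these probes in parallel and watching how the average and worst-case times degrade.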

Should developer tools, languages, frameworks, etc. be standardized across an organization? [closed]

The organization that I currently work for seems to be heading in the direction of dictating to software developers which tools, languages, frameworks, etc. must be used. However, nobody has convinced me that this is a good thing. The main argument I have heard is that it will make training easier. But, after developing software for over 10 years, I've never relied on training to learn how to use an IDE, programming language, or anything else; so I just can't relate.
With the rapid speed at which technology evolves, and the s-l-o-w-n-e-s-s at which I know the standards will adapt, I am concerned that my customers will have requirements that I won't be able to easily implement or won't be able to implement as efficiently as I should. For example, if there is a UI requirement for an auto-complete feature in a web app, and no API has been approved for this yet, I would need to implement auto-complete myself as opposed to using one of the many APIs that provide it out of the box.
A more radical example is if my customers wanted to have Google Wave features. In that case I would want the flexibility of configuring my development environment (including the IDE) and selecting appropriate frameworks (ex: GWT) to use.
Please provide feedback on whether or not you think that software developer tools, languages, etc should be standardized and a few points to support your argument.
There is a lot of benefit in standardization. My organization has fairly set standards on what technology we will use. We realize strong benefits in the following areas:
Hiring. It is easy to describe what technologies we are looking for and make sure our recruiters are looking for the right people.
License/Software costs. I can buy enterprise licenses easily. It gives me the opportunity to keep costs down by letting me spend more with a smaller number of vendors and thus get more leverage.
Consistency of delivery. Our teams have a very good idea of what projects will take to build, rollout and maintain because they have done it with success before (and they know the pitfalls too).
Agility. I can have one team take over for another or one individual take over for another more easily because of standardization.
Quality. We have peer reviews across teams as well as QA across teams.
Without a consistent use of a technology stack, tools, languages and frameworks, these types of benefits would be more difficult to realize. I am not closed off to new technologies, but there has to be a concrete reason beyond "what if I want to ..."
A major issue with standardization is that once standards are out there, they get stamped in concrete and are difficult to change. This is why our corporate IT environment is stuck on IE 6, and the best change control system we have access to is CVS. Given this situation, some developers break the rules, and some find jobs at more innovative companies.
You have a mixed bag here.
I wouldn't standardize on IDEs, because every developer works differently. Those who are insanely proficient in emacs may see their performance suffer if forced to use Visual Studio. I optimize my Visual Studio experience with a 30" monitor and find it incredibly productive.
However, standardizing on some tools, such as SCons or make or something to build products is perfectly reasonable.
Banning some libraries and having a process where new libraries are either approved or not is also very reasonable. I know lots of companies that ban boost, or JQuery, or ban open source libraries in general, etc. And they had good reasons for doing it. I know I got fairly upset when an intern incorporated some random "security" library he found on the internet without running it by anyone.
In the end every company is different. You have to be standardized enough to avoid serious complications and issues as people come and go, or as new products are formed and organizational structures change. But you have to be flexible enough to avoid re-inventing every wheel you need.
The important thing is to have clear reasons for adopting a certain tool or banning some other tool or library. You can't just have management dictate that thou shalt use this and not that without consulting the engineering team and making the decision for good reasons. And once decisions are made those reasons should be written down and clearly communicated.
And also, if, in the end, your favorite tool or library isn't adopted, please don't whine about it. Be adaptable and do your job, or find a new one that makes you happier.
I once worked for a manager who felt the need to innovate at every level of his software development operation. Every development tool had to be cutting edge (preferably in beta). Many of the tools he asked us to use didn't have good documentation, and training was not available. Ultimately, most of the technology we tried simply didn't work. We wasted a lot of time churning through new technologies, only to dump them when it became clear we couldn't make progress.
I tried to make the case that innovation belongs in the area where your value proposition lies. Innovation can also be used judiciously where standard techniques fail. But for most mundane tasks, using tried-and-true tools and methods should be the default. Less risk, less cost, less management attention needed. That way you can focus time and energy on the areas where innovation has the most benefit.
So I think standardization has an important role. But blindly saying everything must be standard is just as sure to fail as my manager who thought everything must be innovative.
The number one argument in favor of standardization is that it maximizes the ability of the organization as a whole to use a common body of knowledge. Don't know how custom web controls are built in ASP.NET/C#? Ask Bill down the hall who has the knowledge. If you use different tools, such organizational wisdom is cut off at the knees. While it is not good to be restricted to a least common denominator (and hopefully your management will realize this) you should not overlook the benefits of shared experience!
UPDATE: I do not agree that innovation and standardization are polar opposites. Indeed, would we have nearly the level of web innovation if we still had the mishmash of networking standards characteristic of the 1980s? No we would not. Of course, we might have more innovation on new low-level networking protocols but is that really worth it? In its place, we've had an explosion of creativity within the bounds of TCP/IP and the Web standards (http, html, etc.)
The trick is knowing how to standardize without using it as an argument for closing down all new exploration. For example, we use only ASP.NET/C#/SQL Server in my company but I'm perfectly open to the use of new tools within this framework (we recently adopted the DevExpress reporting package, for example, supplanting the earlier standard).
Standardization is a must for a productive development team. However, that doesn't mean you can't revisit the standards from time to time to adjust them to new technologies and trends.
Whether you develop operations software for internal clients, or products for external clients, there is no compelling reason not to standardize. You certainly did not give one.
Had you seen how companies struggle to hold together heterogeneous products that have been maintained for 10 years or more, and are now a conglomerate of various technologies that developers at some point thought made sense, you would not have asked this question.
Off the top of my head, I could name at least two well-known software companies that will be driven out of business because their cost of maintenance has become so high that they can no longer compete (but I won't).
I think the misconception here is that suppressing individualism would suppress innovation. That is simply not true. It is poor technical leadership that suppresses innovation.
One unpleasant consequence of standardization is that it tends to stifle innovation.
Innovation is scary. It involves cost and risk.
Standardization is not scary. It reduces cost and risk in the short term. Until your competitors have created a game-changing innovation. Then standardization is very costly.
It depends on the organization I think. One like Microsoft, yes, there should be a bit of a standard. A small business with one IT department, no. A larger business with several offices around the world ... maybe.
it all depends :-P
Assuming the organization has a broad suite of enterprise applications to manage, I'd say no for the following reasons, though I may be taking the message of everything being the same a bit too literally:
Compromise on using best-of-breed for systems, e.g. if all the databases are to be MS-SQL then any Oracle DB solution is thrown out. This would also apply to the fact that everyone using an IDE has to use the same one whether they be doing Data Warehouse report development, web applications, console applications or winForms. I'm thinking of systems like ERP, CRM, SCM, CMS, SSO and various other TLAs, FLAs, and SLAs. (LA = Letter acronyms for a decoding hint if you need it)
Upgrading by committee is another interesting issue. If each team can choose its own tools, one person can decide when to upgrade, e.g. start using Visual Studio 2008 instead of Visual Studio 2005. With organization-wide standards, you have to determine at what threshold it is worth upgrading everyone simultaneously, which may be a big headache if there are more than a few developers. For example, over the past 10 years, when would the IDE changes, framework changes, etc. have happened?
Exceptions to the standards. Could a contractor bring in something not used in the organization if they believe it helps them build better software, e.g. Resharper or other add-ons that some contractors believe are very worthwhile that the organization doesn't want to spend the money to get? What about legacy systems that may make the standard become a bit unwieldy, e.g. this was built in ASP.Net 1.1 and so everyone has to have VS 2003 installed even if most will never use it?
Just my thoughts on this.
There are several good reasons to standardize.
First, it allows the enterprise better organizational flexibility, if everybody is more or less familiar with the same things. It also allows people to help each other better. I can't help with problems in the ASP.NET stuff, and there's not all that many people who can help me on the C++ side.
Second, it reduces support problems and expenses. Oracle and SQL Server are both decent products, but using both for similar functions is only going to cause problems. Not to mention that I've been in shops using several widely different platforms to do similar things, and it wasn't fun.
Third, there are some things that just have to be standardized. We couldn't operate with half of us on VS 2005 and half on VS 2008, since we keep project files under source control. We had to pick a time and convert over.
Fourth, in some businesses, it simplifies the regulatory problems. I don't know what business you're in. I work at a place where we can get away with making mistakes right now, but I've also contracted at a bank and a utility, where it's necessary to be able to show auditors that everything is going in a standard way.
Fifth, it can simplify procurement, if you're dealing with software that costs money.
This doesn't particularly limit us, since if there's something we need that isn't standardized on we just go ahead and get it or do it.
If you want to make a business case against standardization, you'll need to have a business-related argument. Your argument seems to be that you won't be able to implement features the user wants, and that is a consideration. Got another argument?
There's nothing wrong with standardizing on an IDE that is rich enough to be configured for individual developers.
However, do make sure that you don't prevent individual developers from using additional tools, as long as the tools are licensed and that the use of the tool by one developer doesn't require all other developers to use it.
For instance, I happen to use NORMA to help me design databases. The output is SQL Server DDL (or anything else I want). I can make the DDL part of the project without making my NORMA source part of it. Later developers do not need to use NORMA to work on the project.
On the other hand, if I decided to use the Configuration Section Designer to create configuration sections, then future developers would also have to use it. A decision would need to be made about whether to use that tool.
The company I work for uses C#, ASP.NET, and JavaScript, and generates HTML. The advantages, beyond those mentioned above, are a perception of improved velocity for maintenance and adaptive changes. The disadvantages include generating some boredom for people who are technically savvy (geeky) and prefer to mix and match languages, depending on what they fancy is better suited, or for 'performance reasons'.
Technical and personal supervision is always good to have when you are developing as fast as you can to meet tight deadlines and competing in a highly saturated market for web development.

Software design period...what do other developers do? [closed]

I'm a new software architect/lead, coming up with the software design for a team of software developers. I'm producing the requirements spec, interface header files, Visio software design docs, the build plan, etc.
My question is: what does the rest of the team do during this period? I'm certainly engaging them in the design, but we don't need the whole team actively working on what I'm doing all the time.
Also, are there any good books for a new software architect?
Generally the various stages overlap, so there will be some coding during design, etc. There are a lot of things to do besides that. They can be reviewing unfamiliar technology that is going to be used, setting up the source control system, reviewing business requirements, and reviewing your documents to make sure they make sense and are clear. There is a lot of other work to be done besides programming.
What a software team does while the lead does the design differs greatly from company to company. At my company we try to work on the design while the developers are finalizing other projects or fixing bugs.
Another approach that I've taken when starting a whole new project is to get the developers to work on the design as well: people with a good understanding of the requirements can help you design smaller parts of the system and write the specs for them. Others can work on mockups and frameworks. This worked rather well for the small software team I led in a previous job (4 developers in total).
I also found it useful to have other team members research parts I'm unsure of (or even validating that things I think should work will indeed work), such as:
Investigating whether an external API provides the features we need
Writing a small proof of concept or technology demonstrator
Create an API mockup (header file, interface or REST endpoint) to investigate whether the API looks useful.
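For instance, in C such a mockup can be nothing more than a commented header plus a stub implementation; every name below is invented purely for illustration:
```c
/* sensor_api.h -- hypothetical mockup of an API under evaluation.
 * Only the shape of the interface matters at this stage. */
#ifndef SENSOR_API_H
#define SENSOR_API_H

#include <stddef.h>

typedef struct sensor sensor_t;   /* opaque handle */

/* Returns NULL on failure; 'device_path' names the bus the sensor sits on. */
sensor_t *sensor_open(const char *device_path);

/* Fills 'out' with up to 'max' samples; returns the number actually read,
 * or -1 on error. A mockup like this invites the right questions early:
 * blocking or not? what units? who owns the buffer? */
int sensor_read(sensor_t *s, double *out, size_t max);

void sensor_close(sensor_t *s);

#endif /* SENSOR_API_H */
```
Writing throwaway client code against a stub of this header tells you quickly whether the interface is pleasant to use, before anyone commits to implementing it for real.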
As others have said, you typically want a ramp-up period during the first part of the project, and through the first iteration. You're planning on building this iteratively, aren't you? Start with a core team (no more than 3-4 people, since you're going to need to communicate heavily with each other) to help you explore the requirements, get a basic data model in place, identify and set up any frameworks, and identify and set up build and test tools. Some coding activities typically take place in the design phase: UI mockups, and run-ahead prototypes of technically sensitive areas (whatever risks you have should be mitigated by explorative coding, be they new technologies, undocumented interfaces to integrated systems, or unstable requirements).
But coders in the design phase should help with the design, in order to get their buy-in, and to help train up the rest of the team during the first iterations. Your role during this is to ensure that the major nonfunctional requirements are known, prioritized, met by the design, and testable. You should also collaborate with the project lead, or whoever else is responsible for staffing and financing, in order to sketch out the iterations and the staffing levels needed. Ensure the solution can be built iteratively, and aim at implementing only a basic structure during the first iteration, both to build confidence and to eliminate risks. (Sometimes you can push major risks to the second iteration, and focus the first towards confidence and team building.)
And of course, be sure you are not designing every detail. You should be able to use every design artifact in the next iteration (and elaborate on them later as needed). Since design decisions are expensive to change, try to postpone them. However, some influence the entire solution (for instance, the data model, or your approach to security) and absolutely must be at least outlined up front. This isn't waterfall. This is just not closing your eyes and hoping a viable architecture will emerge by magic.
But design proceeds throughout the iterations. It's just that you do less of it as you go along, and with lesser impact on the solution (unless you're unlucky... and then things get expensive).
Stop doing the useless things you do and just start coding with them! ;)
If there is no overlap with another ongoing project, getting them involved as you're doing is great; maybe push it a little further by having them prototype and present the pros and cons of alternative technologies (APIs, frameworks, libraries, etc.) that your project could use.
As a new software architect, I can recommend some books that helped me understand the role of the architect (but of course not to master it):
Fundamentals of Software Architecture: An Engineering Approach:
This book gives a good modern overview of software architecture and its many aspects; a good place to start if you are a beginner or want to broaden your knowledge.
Software Architecture in Practice:
Explains what software architecture is, why it's important, and how to design, instantiate, analyze, evolve, and manage it in disciplined and effective ways.
Software Architect's Handbook:
This book takes you through all the important concepts, right from design principles to different considerations at various stages of your career in software architecture. It begins by covering the fundamentals, benefits, and purpose of software architecture.
Clean Architecture: A Craftsman's Guide to Software Structure and Design:
Learn what software architects need to achieve and how to achieve it, master essential software design principles and see how designs and architectures go wrong.
Software Architecture: The Hard Parts:
An advanced architecture book; you'll learn how to think critically about the trade-offs involved in distributed architectures.
Usually there's another project they can work on, but...
I have my team review the project specs/requirements and put together a basic/preliminary structure to get them already thinking through the application and working out specific questions.
When we convene at the table to discuss the plan they already have an idea of what the project is and requires and in some cases, they present questions I may have missed or overlooked.
Although it's too late now, a good way to approach it is to move the architect over before his current project has ended. Start freeing him up at, say, 25%, then work your way up to 75-100% on the new project a month or two before it starts (maybe more, depending on how much analysis and customer interaction there is).
On a trivial project (let's say 2 man-years) it might not be necessary, but anything bigger than that can end up in chaos if somebody doesn't at least get the analysis right before everybody jumps aboard.
If your team does not have any other projects to work on, ask the experienced programmers on your team to come up with a prototype so that you can create a requirements doc according to the needs of the client.
Also, programmers new to the technologies being used by the team could use this time to familiarize themselves with the stack on which your team is going to develop the project.
architect != designer
Chances are that all of your developers can help with the design; let them. Architects don't have to be "lone wolves" and do everything themselves. You lay out the guidelines and the principles and the scaffolding, rough in the wiring, and let your developers flesh out the details - whether it is drawing Visio diagrams or building prototypes to mitigate unknowns/risks.
Migrate towards Agile/XP and away from waterfall methods, and you'll find the team a lot more help.
When making the general design, it's very handy to have programmers create proof-of-concepts. Do that especially with parts of the system that could end up being show stoppers if they don't work in the way you plan to do them, so you can think of alternatives, and adjust the design.
That's going to help you to make the right design-decisions before moving entirely into a certain direction.
Just doing a design, and then moving on and start coding is a sure way to mess up a project. You won't realize that your design is not feasible (or just plain sucks) until you're half-way coding, and by then it's too late to make radical changes.
You'll waste time mitigating non-existing problems during the design, and you'll run into unforeseen problems during implementation.

If I were to build a new operating system, what kind of features would it have? [closed]

I am toying with the idea of creating a completely new operating system and would like to hear what everyone on this forum's take is on that. First, is it too late? Are the big boys so entrenched in our lives that we will never be able to switch (wow, what a terrible thought...)? But if this is not the case, what should an operating system do for you? What features are the most important? Should all the components be separate installations (in other words, should the base OS really have no user functionality, with that added on by creating "plug-ins", kind of like a good flexible tool)?
Why do I want to do this... I am more curious about whether there is a demand, and I am wondering, since the OSes we use most today (Linux, Windows, Mac OS X (FreeBSD)) were actually written more than 20 years ago (and I am being generous; I mean, dual and quad cores did not exist back then, buses were much slower, hardware was much more expensive, etc.), I was just curious whether, with the new technology, we would do anything differently?
I am anxious to read your comments.
To answer the first question: It's never too late. Especially when it comes to niche market segments and stuff like that.
Second though, before you start down the path of creating a new OS, you should understand the kind of undertaking it is: it'd be a massive project.
Is it just a normal programmer "scratch the itch" kind of project? If so, then by all means go ahead; you might learn a lot of things by doing it. But if you're doing it for the resulting product, then you shouldn't start down that path until you've looked at all the current OSes under development (there are a lot more than you'd think at first) and figured out what you'd like to change in them.
Quite possibly the effort would be better spent improving/changing an existing open source system. Even for your own experimentation, it may be easier to get the results you want if you start out with something already in development.
First, a little story. In 1992, during the very first Win32 conference (what would become the MS Professional Developers Conference), I had the opportunity to sit over some lunch with one Mr. Dave Cutler (Chief Architect of what most folks would now know as Windows NT, Windows 2000, XP, etc.).
I was at the time working in the Multimedia group at IBM Boca Raton on what some of you might remember, OS/2. Having worked on OS/2 for several years, and recognizing "the writing on the wall" of where OSes were going, I asked him, "Dave, is Windows NT going to take us into the next century, or are there other ideas on your mind?". His answer to me was as follows:
"M...., Windows NT is the last operating system anyone will ever develop from scratch!" Then he looked over at me, took a sip of his beer, and said, "Then again, you could wake up next Saturday after a particularly good night out with your girl, and have a whole new approach for an operating system that'll put this to shame."
Putting that conversation into context, and given the fact that I'm back in college pursuing my Master's degree (specializing in operating systems design), I'd say there's TONS of room for new operating systems. The thing is to put things into perspective. What are your target goals for this operating system? What problem space is it attempting to service?
Putting this all into perspective will give you an indication of whether you're really setting your sights on an achievable goal.
That all being said, I second an earlier commenter's note about looking into things like Singularity (the focus of a talk I gave this past spring in one of my classes...), or, if you really want to "sink your teeth into" an OS in its infancy, look at ReactOS.
Then again, WebOSes, like gOS, and the like, are probably where we're headed over the next decade or so. Or then again, someone particularly bright could wake up after a particularly fruitful evening with their lady or guy friend, and have the "next big idea" in operating systems.
Why build the OS directly on a physical machine? You'll just be mucking around in assembly language ;). Sure, that's fun, but why not tackle an OS for a VM?
Say an OS that runs on the Java/.NET/Parrot (you name it) VM, that can easily be passed around over the net and can run a bunch of software.
What would it include?
Some way to store data (traditional FS won't cut it)
A model for processes / threads (or just hijack the stuff provided by the VM?)
Tools for interacting with these processes etc.
So, build a simple Platform that can be executed on a widely used virtual machine. Put in some cool functionality for a specific niche (cloud computing?). Go!
For more information on the micro- versus monolithic kernel, look up Linus' 'discussion' with Andrew Tanenbaum.
I would highly suggest looking at an early version of Linux (0.01) to at least get your feet wet. You're going to be mucking about with assembly and very obscure low-level stuff just to get started (especially getting into protected mode, multitasking, etc.). And yes, it's probably true that the "big boys" already have the market cornered. I'm not telling you NOT to do it, but maybe doing some work on the Linux kernel would be a better stepping stone.
Check out Cosmos and Singularity, these represent what I want from a futuristic operating system ;-)
Edit:
SharpOS is another managed OS effort. Suggested by yshuditelu
An OS should have no user functionality at all. User functionality should be added by separate projects, which does not at all mean that the projects should not work together!
If you are interested in user functionality maybe you should look into participating in existing Desktop Environment projects such as GNOME, KDE or something.
If you are interested in kernel-level functionality, either try hacking on a BSD derivate or on Linux, or try creating your own system -- but don't think too much about the user functionality then. Getting the core of an operating system right is hard and will take a long time -- wanting to reinvent everything does not make much sense and will get you nowhere.
You might want to join an existing OS implementation project first, or at least look at what other people have implemented.
For example AROS has been some 10 or more years in the making as a hobby OS, and is now quite usable in many ways.
Or how about something more niche? Check out Symbios, which is a fully multitasking desktop (in the style of Windows) operating system - for 4MHz Z80 CPUs (Amstrad CPC, MSX). Maybe you would want to write something like this, which is far less of a bite than a full next-generation operating system.
Bottom line...focus on your goals and even more importantly the goals of others...help to meet those needs. Never start with just technology.
I'd recommend against creating your own Operating System. (My own geeky interruption...Look into Cloud Computing and Amazon EC2)
I totally agree that it would first help by defining what your goals are. I am a big fan of User Experiences and thinking of not only your own goals but the goals of your audience/users/others. Once you have those goals, then move to the next step of how to meet it.
Nowadays, what is an operating system anyway? Kernel, operating system, virtual server instance, Linux, Windows Server, Windows Home, Ubuntu, AIX, zSeries OS/390, et al. I guess this is a good definition of OS... Wikipedia.
I also like Sun's slogan "the Network is the computer"... but their company has really fallen in the past decade.
On that note of the network being the computer... again, I highly recommend checking out Amazon EC2 and, more generally, cloud computing.
I think that building a new OS from scratch to resemble the current OSes on the market is a waste of time. Instead, you should think about what Operating System will be like 10-20 years from now. My intuition is that they will be so different as to render them mostly unrecognizable by today's standards. Think of frameworks such as Facebook (gasp!) for models of how future OSes will operate.
I think you're right about our current operating systems being old. Someone said that all operating systems suck. And yes, don't we have problems with them? Call it BSOD, Sad Mac or a Kernel Panic. Our filesystems fail, there are security and reliability problems.
Microsoft pursued an interesting approach with its Singularity kernel. It isolates processes in software, using a virtual machine similar to .NET, and formal verification methods. Basically all IPC seems to be formally specified and verified, even before a program is run.
But there's another problem with it: Singularity is only a kernel. You can't run applications not designed for it. This is a huge penalty, making an eventual transition (Singularity is not public) quite hard. If you manage to produce something of similar technical advantages, but with a real transition plan (think about the IPv4->IPv6 problems, or how Windows got so much market share on the desktop), that could be huge!
But starting small is not a bad choice either. Linux started just like this, and there are many cases when it leads to better design. Small is beautiful. Easier to change. Easier to grow. Anyway, good luck!
Check out the Singularity project,
do something revolutionary
I've always wanted an operating system that was basically nothing but a fresh slate. It would have built-in plugin support which allows you to build the user interface, applications, whatever you want.
This system would work much like a Lua sandbox in a game, minus the limitations. You could build a plugin or module system that would have access to a variety of subsystems. For example, if you were to write a web browser application, you would need to load the networking library and use that within your plugin script. Need "security"? Load the library.
The difference between this and Linux is that Linux is an operating system with a window manager that runs on top of it. In this theoretical operating system, you would be able to implement the generic "look" and "feel" of a variety of windows within the plugin system, or you could create a custom interface.
The difference between this and Windows is that it's fully customizable, and by fully I mean that if you wanted to not implement any cryptography at all, you could do that, or if you wanted to customize an already existing window, you could do that. Nothing is closed to you.
In this theoretical operating system, there is an OS with a plugin system. The plugin system uses a simple and powerful language.
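To make the plugin idea concrete, here is a minimal sketch of the kind of dispatch such a system might use underneath: a table of function pointers that the core consults whenever a script asks to "load the library". All names are invented for illustration, and a real system would use dynamic loading (dlopen-style) rather than a static table:
```c
#include <stdio.h>
#include <string.h>

/* Every plugin exposes the same tiny interface; the OS core only ever
 * sees this table, never the plugin's internals. Purely illustrative. */
typedef struct {
    const char *name;
    int  (*init)(void);
    void (*run)(const char *arg);
} plugin_t;

static int  net_init(void)           { puts("networking up"); return 0; }
static void net_run(const char *arg) { printf("fetching %s\n", arg); }

static int  sec_init(void)           { puts("security up"); return 0; }
static void sec_run(const char *arg) { printf("auditing %s\n", arg); }

static plugin_t registry[] = {
    { "networking", net_init, net_run },
    { "security",   sec_init, sec_run },
};

/* "Need 'security'? Load the library." -- look it up and initialize it. */
static plugin_t *plugin_load(const char *name)
{
    for (size_t i = 0; i < sizeof registry / sizeof registry[0]; i++)
        if (strcmp(registry[i].name, name) == 0 && registry[i].init() == 0)
            return &registry[i];
    return NULL;
}

int main(void)
{
    plugin_t *net = plugin_load("networking");
    if (net)
        net->run("http://example.org");  /* a "web browser" plugin would build on this */
    return 0;
}
```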
If you're asking what I'd like to see in an operating system, I can give you a list. I am just getting into programming, so I'm not sure if any of this is possible, but I can give you my ideas.
I'd like to see a developed operating system (besides the main ones) in which it ISN'T a pain to get the wireless card to work. That is my #1 pet peeve with most of the ones I've tried out.
It would be cool to see an operating system designed by a programmer for other programmers. Have it so you can run programs for all the different operating systems. I don't know if that's possible without having a copy of Windows and OS X, but it would be really damn cool if I could check the compatibility of programs I write with all operating systems.
You could also consider going with MINIX which is a good starting point.
To the originator of this forum, my hat's off to you, sir, for daring to think in much bolder and more idealistic terms regarding the IT industry. First and foremost, your questions are precisely the kind you would think should engage a much broader audience, given the flourishing computer sciences all over the globe and the openness taught to us by the revolutionary Linux OS, which has only begun to win the hearts and minds of so many out there by way of strengthening its user-friendly interface. So kudos on pushing the envelope.
If I'm following correctly, you are supposing that, given the fruits of our labor thus far, the development of further hardware and software concoctions could, or at least should, be less conventional. The implication, of course, is that any new development would reach its goal faster than what is typical. The prospect, however, of an entirely new OS this time around would be challenging, to say the least, only because there is so much friction out there already between Linux and Windows. It is really a battle between the open source and proprietary ideologies. Bart Roozendaal in a comment above proves my point nicely. Forget the idea of innovation and whatever possibilities may come from a much more contemporary operating system, for such things are secondary. What he is asking, essentially, is: are you going to be on the side of profit or not? He gives his position away easily here. As you know, Windows is notorious for its monopolistic approach regarding new markets, software, and other technology. It has maintained a death grip on its hegemony since its existence, and sadly the Windows OS is riddled with endless bugs and backdoors.
Again, I applaud you for taking a road less travelled, and hopefully you will forge ahead and not become discouraged. Personally, I'd like to see another OS out there... one much more contemporary.

Comparison of embedded operating systems?

I've been involved in embedded operating systems of one flavor or another, and have generally had to work with whatever the legacy system had. Now I have the chance to start from scratch on a new embedded project.
The primary constraints on the system are:
It needs a web-based interface.
Inputs are required to be processed in real-time (so a true RTOS is needed).
The memory available is 32MB of RAM and FLASH.
The operating systems that the team has used previously are VxWorks, ThreadX, uCos, pSOS, and Windows CE.
Does anyone have a comparison or trade study regarding operating system choice?
Are there any other operating systems that we should consider? (We've had eCos and RT-Linux suggested).
Edit - Thanks for all the responses to date. A pity I can't flag all as "accepted".
I think it would be wise to evaluate carefully what you mean by "RTOS". I have worked for years at a large company that builds high-performance embedded systems, and they refer to them as "real-time", although that's not what they really are. They are low-latency and have deterministic schedulers, and 9 times out of 10, that's what people are really after when they say RTOS.
True real-time requires hardware support and is likely not what you really mean. If all you want is low latency and deterministic scheduling (again, I think this is what people mean 90% of the time when they say "real-time"), then any Linux distribution would work just fine for you. You could probably even get by with Windows (I'm not sure how you control the Windows scheduler though...).
Again, just be careful what you mean by "Real-time".
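One practical way to find out which kind you need (and which kind you have) is to measure wakeup jitter. A rough sketch for a POSIX-ish target follows; it assumes clock_nanosleep is available, and an RTOS would use its native timer API instead:
```c
/* Rough scheduler-jitter probe: ask to wake every 1 ms and record how
 * late the wakeups actually are. Purely illustrative. */
#include <stdio.h>
#include <time.h>

#define PERIOD_NS  1000000L   /* 1 ms */
#define ITERATIONS 1000

int main(void)
{
    struct timespec next, now;
    long worst_ns = 0;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < ITERATIONS; i++) {
        next.tv_nsec += PERIOD_NS;
        if (next.tv_nsec >= 1000000000L) {   /* carry into seconds */
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);

        long late_ns = (now.tv_sec - next.tv_sec) * 1000000000L
                     + (now.tv_nsec - next.tv_nsec);
        if (late_ns > worst_ns)
            worst_ns = late_ns;
    }
    printf("worst-case wakeup lateness: %ld us\n", worst_ns / 1000);
    return 0;
}
```
A deterministic scheduler keeps that worst case bounded under load; a hard real-time system is one where exceeding the bound counts as outright failure.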
It all depends on how much time your team has been allocated to learn a "new" RTOS.
Are there any reasons you don't want to use something that people already have experience with?
I have plenty of experience with vxWorks and I like it, but disregard my opinion as I work for WindRiver.
uC/OS II has the advantage of being fully documented (as in the source code is actually explained) in Labrosse's Book. Don't know about Web Support though.
I know pSOS is no longer available.
You can also take a look at this list of RTOSes
I worked with QNX many years ago, and have nothing but great things to say about it. Even back then, QNX 4 (which is positively chunky compared to the Neutrino microkernel) was perfectly suited for low memory situations (though 32MB is oodles compared to the 1-2MB that we had to play with), and while I didn't explicitly play with any web-based stuff, I know Apache was available.
I purchased some development hardware from Netburner.
It has been very easy to work with and very well documented. It is an RTOS running uClinux. The company is great to work with.
It might be a wise decision to select an OS that your team is experienced with. However, I would like to promote two good open-source options:
eCos (as you mentioned)
RTEMS
Both have a lot of features and drivers for a wide variety of architectures. You haven't mentioned which architecture you will be using. They provide POSIX layers, which is nice if you want to stay as portable as possible (see the sketch below).
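To illustrate the portability point, a task written only against the POSIX layer, like the minimal sketch below (the thread body is just a placeholder), should build for RTEMS, for eCos with its POSIX compatibility layer, or for desktop Linux, which makes prototyping on a PC easy:
```c
/* A trivial worker written against POSIX threads only, so the same
 * source can target an RTOS POSIX layer or Linux. Illustrative. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *worker(void *arg)
{
    const char *name = arg;
    for (int i = 0; i < 3; i++) {
        printf("%s: tick %d\n", name, i);  /* placeholder for real work */
        sleep(1);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, "sensor-task");
    pthread_join(tid, NULL);
    return 0;
}
```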
Also the license for both eCos and RTEMS is GPL but with an exception so that the executable that is produced by linking against the kernel is not covered by GPL.
The communities are very active and there are companies which provide commercial support and development.
We've been very happy with the Keil RTX system....light and fast and meets all of our tight real time constraints. It also has some nice debugging features built in to monitor stack overflow, etc.
I have been pretty happy with Windows CE, although it is 'heavier'.
Posting to agree with Ben Collins: you really need to determine whether you have a soft real-time requirement (primarily for human interaction) or a hard real-time requirement (for interfacing with timing-sensitive devices).
Soft can also mean that you can tolerate some hiccups every once in a while.
What are the reliability requirements? My experience with more general-purpose operating systems like Linux in embedded systems is that they tend to experience random hiccups due to their smart average-case optimizations that try to avoid starvation and similar issues for individual tasks.
VxWorks is good:
good documentation;
friendly development tools;
low latency;
deterministic scheduling.
However, I suspect that Wind River will turn its main attention to Linux, and that Wind River Linux will break into the market of Wind River VxWorks.
A smaller market means less demand for engineers.
Here is the latest study; the previous one was done more than 8 years ago, so this is the most relevant. The tables can be used to add additional RTOS choices. You'll note that this comparison focuses on lighter machines but is equally applicable to heavier machines, provided virtual memory is not required.
http://www.embedded.com/design/operating-systems/4425751/Comparing-microcontroller-real-time-operating-systems