What are some new technical developments in operating systems?

I need to investigate some new technical developments in operating systems. What would be some interesting developments to write about?
I am looking at any interesting development within the last 5 years, and so far have only come across Azure Sphere.
I expect to write about 3 or 4 examples of new technical developments in operating systems.

I've spent almost 20 years looking for something that's truly new (for operating systems), and the only thing I can think of that's recent is the Spectre (and Meltdown) security disaster and the associated mitigations.
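For a concrete sense of what those mitigations look like at the code level, here is a minimal sketch of the classic Spectre v1 bounds-check-bypass pattern and the index-masking fix (the arrays and sizes here are made up for illustration):

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative victim code (Spectre v1). The bounds check can be
// bypassed speculatively: a mispredicted branch lets the CPU load
// array2[array1[idx] * 512] for an out-of-range idx, leaving a
// secret-dependent footprint in the cache.
uint8_t array1[16];
uint8_t array2[256 * 512];
std::size_t array1_size = 16;

uint8_t victim(std::size_t idx) {
    if (idx < array1_size) {
        return array2[array1[idx] * 512];  // secret-dependent load
    }
    return 0;
}

// One common mitigation: clamp the index branchlessly so that even a
// mispredicted bounds check cannot produce an out-of-range access
// (this form assumes the array size is a power of two).
uint8_t victim_masked(std::size_t idx) {
    if (idx < array1_size) {
        idx &= (array1_size - 1);
        return array2[array1[idx] * 512];
    }
    return 0;
}

int main() {
    return victim(3) + victim_masked(3);  // exercises both paths; returns 0
}
```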
Almost everything else is the same old ideas regurgitated/reimplemented with fresh marketing hype. For an example (your example), consider Azure Sphere: a low-budget/low-effort attempt at recycling an existing kernel (Linux, which itself is/was a reimplementation of a crusty lump of memorabilia from the late 1960s), where the main technical achievement is slapping on a coat of paint in an attempt to convince suckers that a design originally intended for "dumb terminals connected to a mainframe" actually makes sense for modern embedded "Internet of Things" purposes (it's like gluing a muffler onto a horse and pretending it's a "new" kind of motorcycle).
Note that there are things that seem "new" but have nothing to do with the underlying OS. One example is augmented reality (e.g. HoloLens), which is new for user space but barely more than a few tweaks to a few APIs as far as the OS itself is concerned (if I remember right, Microsoft just uses Windows 10, which is mostly an evolution of ideas originating in Windows NT from the 1990s, if not earlier).
Also note that a major part of the problem is that anything actually new breaks compatibility, so it either dies or gets nerfed until it's the same old stuff we've seen hundreds of times before. An example of this is "The Machine" (from HP): a bunch of enthusiastic goals reduced to "Oops, we're repurposing that and, um, using Linux. None of the things that made it interesting survived. Sorry." The other part of the problem is sheer difficulty: building a genuinely new OS from scratch is an enormous undertaking.
Of course this shouldn't be unexpected. When any new technology (fire, wheels, combustion engines, electricity, ...) arrives, you get a period of "pioneering" where new ideas and breakthroughs are frequent; but then the technology matures and these things become infrequent.
So...
For some advice that might be useful in practice (assuming this is a university assignment): "waffling" is a valuable skill to learn. Take anything that might be vaguely related and stretch the truth. Mumble. Be vague where it matters and provide unnecessary detail where it doesn't. Use words like "instantiationalising" to make your professor's eyes glaze over so they can't find any meaning hidden by your words (and let them assume it's their fault for being too busy/distracted to understand, and not your fault). For help, try to read research papers in their entirety and then see if you can figure out why it's impossible for anyone to remember anything more than the introduction and conclusion. With practice, a skilled waffler can fill 10+ pages without actually saying anything at all.

Related

How to motivate a team to work on legacy products

We are a team working on legacy code that is quite old and written in languages from the early days of programming. The team members are trained in the latest technology and are now assigned to legacy code, and they are not happy. How can I motivate them to work on legacy code?
Send your team to meet users and watch them use the software. They should find out what the most critical problems users have with that software are.
Getting to know users makes the work more real - your team will know that adding new functionality or eliminating some bugs will help a real person. That should motivate programmers to get a boring job done.
Cash alone cannot make developers happy. You should also provide a good environment so that they can focus on their work.
Another thing: no technology is bad, legacy, or too old. The point is that if your company needs to maintain it, you must keep it going - but keep up all your standards for design, coding, testing, code review, interactive sessions, etc.
You can also motivate them by converting your legacy code to a new platform for better performance and maintainability. I think every company does that at least once, because they want to compete with other products on the market.
Also provide some cool sessions on other technologies that are used in your company but that they don't know or use. Let them get deeply into things, and give them proper time and support for problem solving. The main goal is to deliver on time with less rework and fewer bugs.
Provide some rewards for their work and keep them happy about it.
I really like "Send your team to meet users and watch them use software."
If I have to motivate my team, I will ask my developers to visit users and find out how happy they are with the product.
I would really like to take on the challenge of making it better than what exists.
Do you have some scope for retiring the legacy code in the foreseeable future? If so, "we only need to keep this going until..." might sweeten the pill.
Are the team members experienced in the languages/environments in which the legacy code is written? If not, it might be simple reluctance to do something that they don't know how to operate. Possibly scheduling in some time for them to gain at least a passing familiarity might be in order; provided it's not too much of a paradigm shift from the latest technology, it shouldn't be all that hard?
Are the team members only allowed to work on the legacy code team, or can their time be split between different projects? I don't think anyone is going to be happy about spending a 40-hour week on FORTRAN debugging. But if you have to spend a few hours on the legacy code knowing that you can take breaks during the day to work on something you actually enjoy, it's a little less painful.
And I'll reiterate what was said before about making sure the team members have time to learn and gain experience with the old technologies before throwing them in there. Try to make the training enjoyable too. Our legacy code training was set up as a competition to see who could come up with the fastest/shortest/most complete/etc solution to interesting problems rather than having to look exclusively at the code we were to be working on. Really, that could be applied to the team's plan even if you don't have time set aside for training. Add a little competition to the task at hand or allow a little bit of time for challenging and competitive side projects.
How are they getting rewarded for working on these legacy products? Do you know what motivates them? Some people may prefer timely recognition and praise, while others may expect cash or an understanding that this isn't necessarily what they signed up for when they initially took the job. I'd be tempted to suggest having 1:1 meetings to see what would make them happier. Is it more money? More flexibility in time off? Training in the legacy technologies? Affirmation that they are doing good work on these ancient systems? "Initial programming days" makes me think of mainframes and other really old tools, where one may wonder, "How much longer will this really run?"
Cash isn't the answer. Free food, soft drinks, whatever, that only goes so far at alleviating the drudgery of legacy code work. What about trying to change their perspective?
"Anyone can do good work with modern code that has a nice IDE with refactoring built in, a ton of resources just one Google search away, but we proud few, we band of brothers, we are good enough to do this with ancient procedural languages. We'll tame this awful mess of code and do it with one hand behind our backs and create processes and tools to make sure the next poor bastards won't have it so bad."
I would say the simplest way to draw a positive response from developers toward legacy coding would be to make the old new in some way.
Have a session or two to identify what it is that the legacy code does, and then get an idea of what it would take to do it anew on a new architecture. The "new architecture" part is key, because 9/10 times, it's the architecture that is dreaded (spaghetti code, pre-standard conventions, etc...).
If you can't get your re-write estimates approved, then at least work out a plan to get your refactoring of legacy code into the daily maintenance. At the very least your developers will feel as though they are working towards something, and something new at that, instead of just monkey-wrenching the old decay that no one wants to even remember.
Just my 2¢.
You can, for example, try to do fancy things on the testing side. Try out mocking frameworks, etc.
Also try to emphasize that handling legacy code is good experience if you want to become a solid programmer, since every technology eventually becomes legacy.
Extra cash? :) Don't know anything else...
Even when legacy code is written in new technologies it's not always a pleasure to work on, let alone in "initial technologies"... I guess the only motivating thing is discovering how programming was done in those days...
The time spent motivating the team, learning the legacy code, and fixing it half-heartedly could easily be used to build the same thing on a new platform, given the amount of resources, IDEs, expertise, frameworks, etc. available for free. The good news is that you already have the system in place; you just need to reproduce the same behavior on a new platform, unlike building something new for a product whose behavior and user experience you don't know.

Encouraging good development practices for non-professional programmers? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
In my copious free time, I collaborate with a number of scientists (mostly biologists) who develop software, databases, and other tools related to the work they do.
Generally these projects are built on a one-off basis, used in-house, and eventually someone decides "oh, this could be useful to other people," so they release a binary or slap a PHP interface onto it and shove it onto the web. However, they typically can't be bothered to make their source code or dumps of their databases available for other developers, so in practice, these projects usually die when the project for which the code was written comes to an end or loses funding. A few months (or years) later, some other lab has a need for the same kind of tool, they have to repeat the work that the first lab did, that project eventually dies, lather, rinse, repeat.
Does anyone have any suggestions for how to persuade people whose primary job isn't programming that it's of benefit to their community for them to be more open with the tools they've built?
Similarly, any advice on how to communicate the idea that version control, bug tracking, refactoring, automated tests, continuous integration and other common practices we professional developers take for granted are good ideas worth spending time on?
Unfortunately, a lot of scientists seem to hold the opinion that programming is a dull, make-work necessary evil and that their research is much more important, not realising that these days, software development is part of scientific research, and if the community as a whole were to raise the bar for development standards, everyone would benefit.
Have you ever been in a situation like this? What worked for you?
Software Carpentry sounds like a match for your request:
Overview
Many scientists and engineers spend much of their lives programming, but only a handful have ever been taught how to do this well. As a result, they spend their time wrestling with software, instead of doing research, but have no idea how reliable or efficient their programs are.
This course is an intensive introduction to basic software development practices for scientists and engineers that can reduce the time they spend programming by 20-25%. All of the material is open source: it may be used freely by anyone for educational or commercial purposes, and research groups in academia and industry are actively encouraged to adapt it to their needs.
Let me preface this by saying that I'm a bioinformatician, so I see the things you're talking about all the time. There's some truth to the fact that many of these people are biologists-turned-coders who just don't have the exposure to best practices.
That said, the core problem isn't that these people don't know about good practices, or don't care. The problem is that there is no incentive for them to spend more time learning software engineering, or to clean up their code and release it.
In an academic research setting, your reputation (and thus your future job prospects) depends almost entirely on the number and quality of publications that you've contributed to. Publications on methods or new algorithms are not given as much respect as those that report new biological findings. So after I do a quick analysis of a dataset, there's very little incentive for me to spend lots of time cleaning up my code and releasing it, when I could be moving on to the next dataset and making more biological discoveries.
I'll also note that the availability of funding for computational development is orders of magnitude less than that available for doing the biology. In a climate where only 10% of submitted grants are getting funded, scientists don't have the luxury of taking time to clean and release their code, when doing so doesn't help them keep their lab funded.
So, there's the problem in a nutshell. As a bioinformatician, I think it's perverse and often frustrating.
That said, there is hope for the future. With second- and third-generation sequencing, in particular, biology is moving into the realm of high-throughput discovery, where data mining and solid computational pipelines become integral to the success of the science. As that happens, you'll see more and more funding for computational projects, and more and more real software engineering happening.
It's not exactly simple, but demonstration by example would probably drive the point home most effectively - find a task the researcher needs done, find someone who did take the time to make a tool w/source available, and point out how much time the researcher could save as a result due to having that tool available - then point out that they could give back to the community in the same fashion.
In effect, what you are asking them to do is become professional developers (with their copious free time), in addition to their chosen profession. Their reluctance is understandable.
Does anyone have any suggestions for how to persuade people whose primary job isn't programming that it's of benefit to their community for them to be more open with the tools they've built?
Give up. Seriously, this is like teaching a pig to sing. (I can say this because I used to be a physicist so I know what they're like.)
The real issue is that your colleagues are rewarded for scientific output measured in publications, not software. It's hard enough in computer science to get recognized for building software; in the other sciences, it's nearly impossible.
You can't sell good development practice to your biology friends on the grounds that "it's good for you." They're going to ask "should I invest effort in learning about good software practice, or should I invest the same effort to publish another biology paper?" No contest.
Maybe framing it in terms of academic/intellectual responsibility would help, to a degree - sharing your source is, in many ways, like properly citing your sources or detailing your research methodology. There are similar arguments to be made for some of the "professional software developer" behaviors you'd like to encourage, though I think releasing the code is probably an easier sell on these grounds than other things which could require significantly more work.
Actually, asking any busy project team to include in their schedule time for making their software suitable for adoption by another team is extremely hard in my experience.
Doing extra work for the public good is a big ask.
I've seen a common pattern of "harvesting" after the project is complete, reflecting that immediate coding for reuse tends to get lost in the urgency of the day.
The only avenue I can think of is if the reuse is within an organisation with a budget for a "hunter gatherer", someone whose reason for being there is IT.
You may be on more of a win for things such as unit tests because they have immediate payback for the development.
For one thing, could we please stop teaching biologists Perl? Teaching non-professional programmers a write-only language is practically guaranteed to lead to unmaintainable, throw-away code. Python fills the same niche, is just as easy to learn (it's even used to teach kids programming!), and is much more readable.
Draw parallels with statistics. Stats is a crucial part of scientific research, and one where the only sensible advice is: either learn to do it properly, or get an expert to do it for you. Incorrectly-done stats can completely undermine a paper, just as badly-written code can completely undermine a public database or web resource.
PS: This blog is very good, but getting them to read it will be an uphill struggle: Programming for Scientists
Chris,
I agree with you to a degree, but in my experience what ends up happening is that in their eagerness to publish you end up with too many "me too" codes and methods, which don't really add to the quality of science. If there was a little more thought about open sourcing code and encouraging others to contribute (without necessarily getting publications out of it) then everyone would benefit.
Definitely agree that a separation between the scientific programmers and the software engineers is a good thing, especially for production applications. But even for scientific programming, the quality of my code would have been so much better if I had followed good practices at the time.
In my experience the best way of getting people to program cleanly is to show a good example when you're working with them.
eg: "I never spend hopeless days debugging my code because the first things I code are automated unit tests that will pinpoint problems when they are small and easily detectable"
or: "I'm very bad at keeping track of versions of things, but sometimes my new code does break what did work before. So I use svn/git/dropbox to keep track of things for me"
In my experience that kind of statement can raise the interest of "biologists that learned how to script".
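To make the unit-test statement concrete, here is a minimal sketch; the `mean` function is a made-up stand-in for the kind of analysis helper a scientist might write:

```cpp
#include <cassert>
#include <vector>

// Hypothetical analysis helper a scientist might write.
double mean(const std::vector<double>& xs) {
    double sum = 0.0;
    for (double x : xs) sum += x;
    return sum / xs.size();
}

// A tiny automated test: rerun it after every change and it pinpoints
// regressions while they are still small and easily detectable.
int main() {
    assert(mean({1.0, 2.0, 3.0}) == 2.0);
    assert(mean({5.0}) == 5.0);
    return 0;
}
```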
And if you need to collaborate on a bigger project, make it clear that you have more experience and that everything will go more smoothly if things are done your way.
Regarding publication of code, current practice is indeed frustrating. I would like to see a new journal like Source Code for Biology and Medicine, where code is peer-reviewed and can be published, but that has no (or very low) publication costs. Putting code on SourceForge or elsewhere is indeed not "scientifically worth it" because it doesn't add a line to your publication list, and most code is not revolutionary enough to warrant paying $1,000 for publication in Source Code for Biology and Medicine or PLoS One...
You could have them use a content management system, like Joomla. That way they only push content and not code.
I wouldn't so much persuade as I would streamline the process. Document it clearly, make video tutorials and bundle some kind of tool chain that makes it ridiculously easy to get source repositories set up without requiring them to become experts in something that isn't their main field.
Take a really good programmer who already knows best practices, ask your scientists to teach him what they need and what they do, and eventually the programmer will have the minimum domain knowledge (I suspect it takes between 1 and 3 years depending on the domain) to build what the scientists ask for.
Developers always learn another domain of competency, because most of their programs are not for developers, so they need to know what the "client" does.
To play devil's advocate: is teaching scientists to be good software engineers the right thing to do? Software in research is usually very purpose-specific - sometimes to the point where a piece of code needs to run successfully only once, on a single data set. The results then feed into a publication and the goal is met. And there's a high risk that your technique or algorithm will be superseded by a better one in short order. So there's a real risk that effort spent producing sparkling code will be wasted.
When you're frustrated by wading through a swamp of ill-formed Perl code, just think that the code you're looking at is one of the rare survivors. Mountains of such code have been written, used a few times, then discarded never to see the light of day again.
I guess I'm just saying there's a big place in research for smelly heinous one-off prototype code. There are good reasons why such code exists. It may not be pretty, but if it gets the job done, who cares? We can always hire a software engineer to write the production-ready version later, IF it turns out to be justified, and let our scientists move on.

Frameworks & Doneworks: Do you think it is cool to use those?

By using somebody else's works you advertise the authors of those works (at least among other programmers). Do you think it is cool?
This line of questioning could go up one more level and become "Programming Languages: Do you think it's cool to use those?" Because someone(s) wrote those too. I can continue this up to the types of computers, to the components, etc...
Monet did not make the brushes or the paint or the canvas (well maybe, not sure). But who creates those building blocks is not quite what stands out at the end.
Languages/Frameworks/etc were built and released to be utilized by the masses (or make money for the creators).
I think it's always cool. Be more efficient, reduce redundancy, promote other useful code.
If you're trying to learn though, reading and understanding the framework you're using is very helpful. There are always other things you can be programming and learning, not necessarily reinventing the wheel.
If using their work has saved you time reimplementing the same thing (but with more bugs) then don't they deserve credit?
Or put another way, stealing other peoples' work without credit (or paying them, depending on whether we're talking about free or commercial software here) isn't cool.
Of course, nobody's stopping you from writing your own framework, if that's what you want to do...
It depends on what kind of programming you're doing.
Are you doing it to achieve a finished program? Then a framework could save you a lot of time.
Are you doing it to create something truly original? Then a framework might simply tie you into an existing way of thinking.
Rembrandt made his own paints. Michelangelo selected his own marble from the quarry. Alan Kay said "People who are really serious about software should make their own hardware". The Excel team famously has their own compiler. The iPhone ain't just an alternate firmware for the Blackberry. ISTM if you want to be at the very top of your game, you've got to get down and dirty with the nitty gritty of it.
I don't know anything about advertising, other programmers, or what's "cool", so I can't respond to those parts of your question.

What inherited code has impressed or inspired you?

I've heard a ton of complaining over the years about inherited projects that we developers have to work with. The WTF site has tons of examples of code that make me actually mutter under my breath, "WTF?"
But have any of you actually been presented with code that made you go, "Holy crap this was well thought out!" or "Wow, I never thought of that!"
What inherited code have you had to work with that made you smile and why?
Long ago, I was responsible for the Turbo C/C++ run-time library. Tanj Bennett wrote the original 80x87 floating point emulator in 16-bit assembler. I hadn't looked closely at Tanj's code since it worked well and didn't require attention. But we were making the move to 32-bits and the task fell to me to stretch the emulator.
If programming could ever be said to have something in common with art this was it.
Tanj's core math functions managed to keep an 80-bit floating point temporary result in five 16-bit registers without having to save and restore them from memory. X86 assembly programmers will understand just what an accomplishment this was. Register space was scarce, and keeping five registers as your temp while simultaneously doing complex math was a beautiful sight to behold.
If it was only a matter of clever coding that would have been enough to qualify it as art but it was more than that. Tanj had carefully picked the underlying math algorithms that would be most suitable for keeping the temp in registers. The result was a blazing-fast floating point emulator which was an important selling point for many of our customers.
By the time the 386 came along most people who cared about floating-point performance weren't using an emulator but we had to support Intel's 386SX so the emulator needed an overhaul. I rewrote the instruction-decode logic and exception handling but left the core math functions completely untouched.
In my first job, I was amazed to discover a "safe ID" class in the codebase (C++), which wrapped numerical IDs in a class templated with an empty tag class, ensuring that the compiler would complain if you tried, for example, to compare or assign a UserId to an OrderId.
Not only did I make sure that I had an equivalent Id class in all subsequent codebases I would be using, but it actually opened my eyes to what the compiler could do to guarantee correctness and help write stronger code.
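A minimal sketch of that idea, reconstructed from the description above (the names UserTag and OrderTag are illustrative, not the original code):

```cpp
#include <cstdint>

// The empty tag type makes UserId and OrderId distinct types, so mixing
// them is a compile-time error even though both are just integers underneath.
template <typename Tag>
class Id {
public:
    explicit Id(std::uint64_t value) : value_(value) {}
    std::uint64_t value() const { return value_; }
    bool operator==(const Id& other) const { return value_ == other.value_; }
private:
    std::uint64_t value_;
};

struct UserTag {};   // empty tag classes, one per kind of ID
struct OrderTag {};
using UserId  = Id<UserTag>;
using OrderId = Id<OrderTag>;

int main() {
    UserId  user(42);
    OrderId order(42);
    // user == order;   // would not compile: different types
    return user.value() == 42 ? 0 : 1;
}
```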
The code that impresses me the most, and which I try to emulate - is code that seems too simple and easy to understand.
It is damn difficult to write that kind of code. :-)
I have a funny story to tell here.
I was working on this Java-ish application, filled with getters and setters that did nothing but get or set, and interfaces and everything ever invented to make code unreadable. One day I stumbled upon some code that seemed very well crafted - it was basically an algorithm implementation that looked very elegant: few lines of readable code, even though it respected every possible rule the project had to adhere to (it was checkstyled automatically).
I couldn't figure out who on the team could have written such code. I was dying to discuss it with him and share thoughts. Thankfully, we had switched to Subversion (from CVS) a few months earlier, and I quickly ran an 'svn blame'. I LOLed all over the place, seeing my name next to the implementation.
I had heard stories about people not remembering code they wrote 6 months back, code that is a nightmare to maintain. I could not believe such a thing could happen: how can you forget code you wrote? Well, now I'm convinced it can happen. Thankfully the code was alright and easy to extend, so I've only experienced half of the story.
Some VB6 code by another programmer at my company that I came across handled the error conditions very well (whether by dealing with them directly or logging them).
Along with some rather complex code that was well commented.
I know this will bring a lot of answers like "I've never found good code before I stepped in" and variations.
I think the real problem there is not that there aren't good coders or excellent projects out there; it's that there's an excess of NIH syndrome and the fact that nobody likes code from others. The latter is just because you have to make an intellectual effort to understand it, a much bigger effort than you need to understand your own code, so you dislike it (it makes you think and work, after all).
Personally I can remember (as everyone I guess) some cases of really bad code but also I remember some pretty well documented, elegant code.
Currently, the project that most impressed me was a very potent dynamic workflow engine, not only for its simplicity but also for the way it is coded. I can remember some very clever snippets here and there, as well as a beautiful metaprogramming library based on a full IDL developed by some friends of mine (Aspl.es).
I inherited a large bunch of code that was SO well written I actually spent the $40 online to find the guy, I went to his house and thanked him.
I think Rocky Lhotka should get the credit, but I had to touch a CSLA.NET application recently {in my private practice on the side} and I was very impressed with the orderliness of the code. The app worked extremely well, but the client needed a few extensions. The original author had died tragically, and the new guy was unsophisticated. He didn't understand CSLA.NET's business object based approach, and he wanted to do it all over again in cut-and-dried VB.NET, without any fancy framework.
So I got the call. Looking at a working example of WinForm binding and CSLA.NET was pretty instructive about a lot of things.
Symbian OS - the old core bit of it anyway, the bit that dated back to the Psion days or those who even today keep that spirit alive.
And sitting right along side it and all over it is all the new crap created by the lowest bidders hired by the big phone corporations. It was startling, you could actually feel in your bones whether a bit of the code-base was old or new somehow.
I remember when I wrote my bachelor thesis on type inference, my Pascal-to-Pascal 'compiler' was an extension of a Parser my supervisor programmed (in Java). It had a pretty good structure as far as I can remember, and for me who had never done any serious Object-oriented programming, it was quite a revelation.
I've been doing a lot of Eclipse plug-in development and often had to debug into the actual Eclipse source code. While I haven't "inherited" it in the sense that I'm not continuing work on it, I've always been impressed with the design and quality of the early core.

If I were to build a new operating system, what kind of features would it have? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion.
Closed 10 years ago.
I am toying with the idea of creating a completely new operating system and would like to hear what everyone on this forum's take on that is. First, is it too late? Are the big boys so entrenched in our lives that we will never be able to switch (wow - what a terrible thought...)? But if this is not the case, what should an operating system do for you? What features are the most important? Should all the components be separate installations (in other words, should the base OS really have no user functionality, with that added on by creating "plug-ins", kind of like a good flexible tool)?
Why do I want to do this... I am more curious about whether there is a demand, and I am wondering, since the OSes we use most today (Linux, Windows, Mac OS X (FreeBSD)) were actually written more than 20 years ago (and I am being generous - I mean dual and quad cores did not exist back then, buses were much slower, hardware was much more expensive, etc.), whether we would do anything differently with today's technology?
I am anxious to read your comments.
To answer the first question: It's never too late. Especially when it comes to niche market segments and stuff like that.
Second though, before you start down the path of creating a new OS, you should understand the kind of undertaking it is: it'd be a massive project.
Is it just a normal programmer "scratch the itch" kind of project? If so, then by all means go ahead - you might learn a lot of things by doing it. But if you're doing it for the resulting product, then you shouldn't start down that path until you've looked at all the current OSes under development (there are a lot more than you'd think at first) and figured out what you'd like to change in them.
Quite possibly the effort would be better spent improving/changing an existing open source system. Even for your own experimentation, it may be easier to get the results you want if you start out with something already in development.
First, a little story. In 1992, during the very first Win32 conference (what would become the MS Professional Developers Conference), I had the opportunity to sit over lunch with one Mr. Dave Cutler (chief architect of what most folks would now know as Windows NT, Windows 2000, XP, etc.).
I was at the time working in the Multimedia group at IBM Boca Raton on what some of you might remember: OS/2. Having worked on OS/2 for several years, and recognizing "the writing on the wall" of where OSes were going, I asked him, "Dave, is Windows NT going to take us into the next century, or are there other ideas on your mind?" His answer to me was as follows:
"M...., Windows NT is the last operating system anyone will ever develop from scratch !". Then he looked over at me, took a sip of his beer, and said, "Then again, you could wake up next Saturday after a particularly good night out with your girl, and have a whole new approach for an operating system, that'll put this to shame."
Putting that conversation into context, and given the fact that I'm back in college pursuing my Master's degree (specializing in operating systems design), I'd say there's TONS of room for new operating systems. The thing is to put things into perspective. What are your target goals for this operating system? What problem space is it attempting to serve?
Putting this all into perspective will give you an indication of whether you're really setting your sights on an achievable goal.
That all being said, I second an earlier commenter's note about looking into things like Singularity (the focus of a talk I gave this past spring in one of my classes...), or, if you really want to "sink your teeth into" an OS in its infancy, look at ReactOS.
Then again, WebOSes, like gOS, and the like, are probably where we're headed over the next decade or so. Or then again, someone particularly bright could wake up after a particularly fruitful evening with their lady or guy friend, and have the "next big idea" in operating systems.
Why build the OS directly on a physical machine? You'll just be mucking around in assembly language ;). Sure, that's fun, but why not tackle an OS for a VM?
Say an OS that runs on the Java/.NET/Parrot (you name it) VM, that can easily be passed around over the net and can run a bunch of software.
What would it include?
Some way to store data (traditional FS won't cut it)
A model for processes / threads (or just hijack the stuff provided by the VM?)
Tools for interacting with these processes etc.
So, build a simple Platform that can be executed on a widely used virtual machine. Put in some cool functionality for a specific niche (cloud computing?). Go!
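As one hedged illustration of the first item in that list (just a sketch; the ObjectStore name and API are made up): a VM-hosted OS might replace the traditional filesystem with a flat, key-addressed blob store, which maps naturally onto a platform meant to be passed around over the net.

```cpp
#include <cstddef>
#include <map>
#include <optional>
#include <string>
#include <vector>

// Illustrative storage layer for a VM-hosted OS: no paths or
// directories, just blobs addressed by key, which is easy to
// replicate or ship around over the network.
class ObjectStore {
public:
    void put(const std::string& key, std::vector<std::byte> blob) {
        store_[key] = std::move(blob);
    }
    std::optional<std::vector<std::byte>> get(const std::string& key) const {
        auto it = store_.find(key);
        if (it == store_.end()) return std::nullopt;
        return it->second;
    }
private:
    std::map<std::string, std::vector<std::byte>> store_;
};

int main() {
    ObjectStore fs;
    fs.put("config", {std::byte{1}, std::byte{2}});
    return fs.get("config") ? 0 : 1;
}
```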
For more information on the micro- versus monolithic kernel, look up Linus' 'discussion' with Andrew Tanenbaum.
I would highly suggest looking at an early version of Linux (0.01) to at least get your feet wet. You're going to be mucking about with assembly and very obscure low-level stuff to even get started (especially getting into protected mode, multitasking, etc.). And yes, it's probably true that the "big boys" already have the market cornered. I'm not telling you NOT to do it, but maybe doing some work on the Linux kernel would be a better stepping stone.
Check out Cosmos and Singularity; these represent what I want from a futuristic operating system ;-)
Edit:
SharpOS is another managed OS effort. Suggested by yshuditelu.
An OS should have no user functionality at all. User functionality should be added by separate projects, which does not at all mean that the projects should not work together!
If you are interested in user functionality maybe you should look into participating in existing Desktop Environment projects such as GNOME, KDE or something.
If you are interested in kernel-level functionality, either try hacking on a BSD derivative or on Linux, or try creating your own system - but don't think too much about the user functionality then. Getting the core of an operating system right is hard and will take a long time - wanting to reinvent everything does not make much sense and will get you nowhere.
You might want to join an existing OS implementation project first, or at least look at what other people have implemented.
For example AROS has been some 10 or more years in the making as a hobby OS, and is now quite usable in many ways.
Or how about something more niche? Check out Symbios, which is a fully multitasking desktop (in the style of Windows) operating system - for 4MHz Z80 CPUs (Amstrad CPC, MSX). Maybe you would want to write something like this, which is far less of a bite than a full next-generation operating system.
Bottom line...focus on your goals and even more importantly the goals of others...help to meet those needs. Never start with just technology.
I'd recommend against creating your own Operating System. (My own geeky interruption...Look into Cloud Computing and Amazon EC2)
I totally agree that it would first help by defining what your goals are. I am a big fan of User Experiences and thinking of not only your own goals but the goals of your audience/users/others. Once you have those goals, then move to the next step of how to meet it.
Nowadays, what is an operating system anyway? Kernel, operating system, virtual server instance, Linux, Windows Server, Windows Home, Ubuntu, AIX, zSeries OS/390, et al. I guess Wikipedia has a good definition of OS...
I like Sun's slogan "the Network is the computer" also...but their company has really fallen in the past decade.
On that note of the Network is the computer... again, I highly recommend, checking out Amazon EC2 and more generally cloud computing.
I think that building a new OS from scratch to resemble the current OSes on the market is a waste of time. Instead, you should think about what operating systems will be like 10-20 years from now. My intuition is that they will be so different as to render them mostly unrecognizable by today's standards. Think of frameworks such as Facebook (gasp!) for models of how future OSes will operate.
I think you're right about our current operating systems being old. Someone said that all operating systems suck. And yes, don't we have problems with them? Call it BSOD, Sad Mac or a Kernel Panic. Our filesystems fail, there are security and reliability problems.
Microsoft pursued an interesting approach with its Singularity kernel. It isolates processes in software, using a virtual machine similar to .NET's, and formal verification methods. Basically all IPC seems to be formally specified and verified, even before a program is run.
But there's another problem with it - Singularity is only a kernel, and you can't run applications not designed for it. This is a huge penalty, making an eventual transition (Singularity is not public) quite hard. If you manage to produce something with similar technical advantages, but with a real transition plan (think about the IPv4 -> IPv6 problems, or how Windows got so much market share on the desktop), that could be huge!
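To give a flavor of the statically checked IPC idea (a toy illustration only; this is not Singularity's actual contract language or API): if a channel is declared with the one message type it may carry, sending anything else becomes a compile-time error rather than a runtime fault.

```cpp
#include <queue>
#include <string>

// Toy illustration of statically typed IPC: a channel only carries the
// message type it was declared with, so sending the wrong message is a
// compile-time error, not a runtime crash.
template <typename Message>
class Channel {
public:
    void send(Message m) { queue_.push(std::move(m)); }
    bool tryReceive(Message& out) {
        if (queue_.empty()) return false;
        out = std::move(queue_.front());
        queue_.pop();
        return true;
    }
private:
    std::queue<Message> queue_;
};

struct OpenFileRequest { std::string path; };  // hypothetical message type

int main() {
    Channel<OpenFileRequest> ch;
    ch.send(OpenFileRequest{"/data/genome.txt"});
    // ch.send(42);  // would not compile: wrong message type
    OpenFileRequest req;
    return ch.tryReceive(req) ? 0 : 1;
}
```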
But starting small is not a bad choice either. Linux started just like this, and there are many cases when it leads to better design. Small is beautiful. Easier to change. Easier to grow. Anyway, good luck!
Check out the Singularity project - do something revolutionary.
I've always wanted an operating system that was basically nothing but a fresh slate. It would have built in plugin support which allow you to build the user interface, applications, whatever you want.
This system would work much like a Lua sandbox in a game would work, minus the limitations. You could build a plugin or module system that would have access to a variety of subsystems. For example, if you were to write a web browser application, you would need to load the networking library and use it within your plugin script. Need "security"? Load the library.
The difference between this and Linux is that Linux is an operating system with a window manager that runs on top of it. In this theoretical operating system, you would be able to implement the generic "look" and "feel" of a variety of windows within the plugin system, or you could create a custom interface.
The difference between this and Windows is that its fully customizable, and by fully I mean if you wanted to not implement any cryptography at all, you can do that, or if you wanted to customize an already existing window, you can do that. Nothing is closed to you.
In this theoretical operating system, there is a minimal core OS plus a plugin system, and the plugin system uses a simple and powerful language.
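A minimal sketch of that plugin host idea (all names here - Subsystem, PluginHost, "networking" - are illustrative inventions, not an existing API):

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <unordered_map>

// Plugins declare the subsystem libraries they need, and the OS core
// hands them out on demand; nothing is loaded unless asked for.
struct Subsystem {
    virtual ~Subsystem() = default;
    virtual std::string name() const = 0;
};

struct Networking : Subsystem {
    std::string name() const override { return "networking"; }
};

class PluginHost {
public:
    void registerSubsystem(std::shared_ptr<Subsystem> s) {
        std::string key = s->name();
        subsystems_[key] = std::move(s);
    }
    // A plugin (e.g. a web browser) asks for what it needs, nothing more.
    std::shared_ptr<Subsystem> load(const std::string& name) {
        auto it = subsystems_.find(name);
        return it == subsystems_.end() ? nullptr : it->second;
    }
private:
    std::unordered_map<std::string, std::shared_ptr<Subsystem>> subsystems_;
};

int main() {
    PluginHost host;
    host.registerSubsystem(std::make_shared<Networking>());
    auto net = host.load("networking");   // browser plugin loads the library
    std::cout << (net ? net->name() : "missing") << '\n';
    return 0;
}
```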
If you're asking what I'd like to see in an operating system, I can give you a list. I am just getting into programming so I'm not sure if any of this is possible, but I can give you my ideas.
I'd like to see a developed operating system (besides the main ones) in which it ISN'T a pain to get the wireless card to work. That is my #1 pet peeve with most of the ones I've tried out.
It would be cool to see an operating system designed by a programmer for other programmers. Have it so you can run programs for all different operating systems. I don't know if that's possible without having a copy of Windows and OS X, but it would be really damn cool if I could check the compatibility of programs I write with all operating systems.
You could also consider going with MINIX which is a good starting point.
To the originator of this forum, my hat's off to you, sir, for daring to think in much bolder and more idealistic terms regarding the IT industry. First and foremost, your questions are precisely the kind you would think should engage a much broader audience, given the flourishing computer sciences all over the globe and the openness taught to us by the revolutionary Linux OS, which has only begun to win the hearts and minds of so many out there by way of strengthening its user-friendly interface. So kudos on pushing the envelope.
If I'm following correctly, you are supposing that, given the fruits of our labor thus far, the development of further hardware and software concoctions could, or at least should, be less conventional. The implication, of course, is that any new development would reach its goal faster than what is typical. The prospect, however, of an entirely new OS at this time would be challenging - to say the least - only because there is so much friction out there already between Linux and Windows. It is really a battle between open-source and proprietary ideologies. Bart Roozendaal, in a comment above, proves my point nicely. Forget the idea of innovation and whatever possibilities may come from a much more contemporary operating system, for such things are secondary. What he is asking essentially is: are you going to be on the side of profit or not? He gives his position away easily here. As you know, Windows is notorious for its monopolistic approach regarding new markets, software, and other technology. It has maintained a death grip on its hegemony since its existence, and sadly the Windows OS is racked with endless bugs and backdoors.
Again, I applaud you for taking a road less traveled, and hopefully forging ahead and not becoming discouraged. Personally, I'd like to see another OS out there... one much more contemporary.