(x64) Where can I find CPU instruction usage statistics for contemporary programs? - x86-64

I'm looking for statistics that would show how frequently each instruction from the x64 instruction set is used overall in modern programs. I have done some Google searches, but every phrase I try turns up "instruction performance statistics" instead, so I'm asking if, by any chance, someone here knows of something like what I'm trying to find.
I'm trying to find info like this because I'm working on my own 64-bit CPU (as an interesting exercise, no other ambitions, so don't worry), and beyond the obvious basic instructions that I know are necessary, I'm aware that x64 processors have a huge number of instructions ranging from... say... exotic to downright (to me) absurdly weird operations. I would therefore like to know how often each is used in actual programs, so that I can prioritize which ones to learn more about and possibly add to my own CPU, based on the assumption that the most frequently occurring ones in existing compiled code, even if they seem weird to me, are actually useful.
If nothing of that sort exists, could you at least point me to some kind of disassembler/analyzer which I can use myself, pointing it at a program/DLL to get instruction usage stats for it?
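If you end up rolling your own analyzer: one low-tech approach is to pipe a binary through objdump -d and tally the mnemonics. A minimal C sketch (assuming GNU binutils objdump is on the PATH; the tab-based parsing matches objdump's usual disassembly layout, so treat this as a starting point rather than a robust tool):

    /* instr_histo.c - count instruction mnemonics in a binary's disassembly.
       Build: cc -o instr_histo instr_histo.c    Run: ./instr_histo /bin/ls */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_OPS 4096

    struct op { char name[32]; long count; };
    static struct op table[MAX_OPS];
    static int n_ops;

    static void tally(const char *mnemonic)
    {
        for (int i = 0; i < n_ops; i++)
            if (strcmp(table[i].name, mnemonic) == 0) { table[i].count++; return; }
        if (n_ops < MAX_OPS) {
            snprintf(table[n_ops].name, sizeof table[n_ops].name, "%s", mnemonic);
            table[n_ops++].count = 1;
        }
    }

    static int by_count(const void *a, const void *b)
    {
        const struct op *x = a, *y = b;
        return (y->count > x->count) - (y->count < x->count);
    }

    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "usage: %s <binary>\n", argv[0]); return 1; }

        char cmd[512];
        snprintf(cmd, sizeof cmd, "objdump -d '%s'", argv[1]);
        FILE *p = popen(cmd, "r");
        if (!p) { perror("popen"); return 1; }

        char line[512];
        while (fgets(line, sizeof line, p)) {
            /* Disassembly lines look like "  401000:\t55\tpush   %rbp";
               the mnemonic is the first word of the third tab-separated field. */
            char *tab1 = strchr(line, '\t');
            char *tab2 = tab1 ? strchr(tab1 + 1, '\t') : NULL;
            if (!tab2) continue;               /* skip headers, labels, spill lines */
            char mnemonic[32];
            if (sscanf(tab2 + 1, "%31s", mnemonic) == 1)
                tally(mnemonic);
        }
        pclose(p);

        qsort(table, n_ops, sizeof table[0], by_count);
        for (int i = 0; i < n_ops; i++)
            printf("%8ld  %s\n", table[i].count, table[i].name);
        return 0;
    }

Summing the output over a directory full of binaries gives a rough static usage profile. Note this counts occurrences in the code, not how often instructions are executed at runtime; for dynamic counts you'd want an instrumentation tool along the lines of Pin or Valgrind.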

Related

What are some new technical developments in operating systems?

I need to investigate some new technical developments in operating systems. What would be some interesting developments to write about?
I am looking at any interesting development within the last 5 years and have only come across Azure Sphere.
I expect to write about 3 or 4 examples of new technical developments in operating systems.
I've spent almost 20 years looking for something that's truly new (for operating systems), and the only thing I can think of that's recent is the Spectre (and Meltdown) security disaster and the associated mitigations.
Almost everything else is the same old ideas regurgitated/reimplemented with fresh marketing hype. For an example (your example) consider Azure Sphere - a low-budget/low-effort attempt at recycling an existing kernel (Linux, which itself is/was a reimplementation of a crusty lump of memorabilia from the late 1960s) where the main technical achievement is slapping on a coat of paint in an attempt to convince suckers that a design originally intended for "dumb terminals connected to a mainframe" actually makes sense for modern embedded "Internet of Things" purposes (it's like gluing a muffler on a horse and pretending it's a "new" kind of motorcycle).
Note that there are things that seem "new" but have nothing to do with the underlying OS. One example is augmented reality (e.g. HoloLens), which is new for user-space, but barely more than a few tweaks to a few APIs as far as the OS itself is concerned (if I remember right, Microsoft just uses Windows 10, which is mostly just an evolution of ideas originating in Windows NT in the 1990s, if not earlier).
Also note that a major part of the problem is that anything that is actually new breaks compatibility, so it either dies or gets nerfed until it's the same old stuff we've seen hundreds of times before. An example of this is "The Machine" (from HP) - a bunch of enthusiastic goals reduced to "Oops, we're repurposing that and, um, using Linux. None of the things that made it interesting survived. Sorry". The other part of the problem is difficulty.
Of course this shouldn't be unexpected. When any new technology (fire, wheels, combustion engines, electricity, ..) arrives you get a period of "pioneering" where new ideas and breakthroughs are frequent; but then the technology matures and these things become infrequent.
So...
For some advice that might be useful in practice (assuming this is a university assignment): "waffling" is a valuable skill to learn. Take anything that might be vaguely related and stretch the truth. Mumble. Be vague where it matters and provide unnecessary detail where it doesn't matter. Use words like "instantiationalising" to make your professor's eyes glaze over so they can't find any meaning hidden by your words (and let them assume it's their fault for being too busy/distracted to understand, and not your fault). For help, try to read research papers in their entirety and then see if you can figure out why it's impossible for anyone to remember anything more than the introduction and conclusion. With practice, a skilled waffler can fill 10+ pages without actually saying anything at all.

What is the best way to start programming with Real Time Linux?

Although I have implemented many projects in C, I am completely new to operating systems. I tried real-time Linux on a Discovery board (STM32) and got the correct results for a blinking LED, but I didn't really understand the whole process, since I just followed the steps and could not find a full description of each step on the internet.
I want to implement scheduling on real-time Linux. What is the best way to start? Any sites, books, tutorials available?
A complete description of the RTLinux process would be appreciated.
Thanks in advance.
The transition from "bare metal" to OS-based programming is something that I experienced in reverse. I started out a complete software guy, totally into the OS side of things, and over time I have moved to the opposite of that (even designing circuits in VHDL!). My advice would be to start simple. Linux is pretty complex, and everywhere you look there are many layers of things all working together to deliver the final product. If you are dead set on a real-time Linux extension, I'd be happy to suggest https://xenomai.org/ which is a real-time extension for Linux.
However, to more specifically address your question about implementing scheduling in Linux: you can, but it will be a large amount of work and can be very complicated. The OS uses the Completely Fair Scheduler ( http://en.wikipedia.org/wiki/Completely_Fair_Scheduler ), and whenever you spin up a thread, it simply gets added to the list to run. This can differ slightly if you implement your code in kernel space as a driver, rely on hardware interrupts, etc., but in general, this is how Linux works. "Real time" generally means the ability to assign threads one of several different priorities and to rely on full thread preemption at any given time; these are concepts that aren't really a part of vanilla Linux. It has some notion of them, but with limitations that can cause problems when you are looking for real-time behavior from Linux.
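To make the priorities point concrete: vanilla Linux does let you opt a thread out of CFS and into the fixed-priority SCHED_FIFO class. A minimal sketch (the priority value 80 is an arbitrary choice; the calls fail without root or CAP_SYS_NICE; build with -lpthread):

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    static void *rt_worker(void *arg)
    {
        /* time-critical work would go here */
        return NULL;
    }

    int main(void)
    {
        pthread_attr_t attr;
        struct sched_param sp = { .sched_priority = 80 };  /* 1..99 for SCHED_FIFO */

        pthread_attr_init(&attr);
        /* Without PTHREAD_EXPLICIT_SCHED the attribute's policy is ignored. */
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        pthread_attr_setschedparam(&attr, &sp);

        pthread_t t;
        if (pthread_create(&t, &attr, rt_worker, NULL) != 0)
            perror("pthread_create");   /* typically EPERM without privileges */
        else
            pthread_join(t, NULL);
        return 0;
    }

A SCHED_FIFO thread preempts all normal CFS threads and runs until it blocks or yields, which is about as close to "real-time priorities" as stock Linux gets.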
What may be helpful to you is an RTOS. If you are looking for a full-on Real Time Operating System, check out FreeRTOS http://www.freertos.org/ . It has a large community and supports a lot of different devices out of the box, with a large amount of example code. They even support your specific board with an example package, so you can give it a shot with nothing to lose! http://www.freertos.org/FreeRTOS-for-Cortex-M3-STM32-STM32F100-Discovery.html . It gives you access to many OS-ish constructs like network APIs, memory management, and threading without the overhead and latency of a huge OS. With an RTOS, you create tasks and assign them priorities, so you become the scheduler and are no longer at the mercy of the OS. You run the OS, not the OS runs you (if that makes sense). Plus, the constructs offered within an RTOS will feel much like bare-metal code and thus will be much easier to follow, understand, and fully learn. It is a simpler world in which to learn the basic building blocks of a full-blown OS such as Linux or Windows.
If this option sounds good, I would suggest looking through the supported devices on the FreeRTOS website, picking one you would like to experiment with, and going for it. I would highly recommend this as a way to learn about scheduling and OS constructs in general, as it is as simple as you can get and open source.
Once you have the basics of an RTOS down, buying a book about Linux specifically wouldn't be a bad idea. Although there are many free resources on the web related to learning about Linux, they are commonly contradictory and can be misleading. Pile Linux-specific knowledge on top of OS knowledge in general, and it can feel overwhelming. Starting simpler will help keep you from getting burnt out and minimize the amount of time you spend feeling lost. Linux is definitely a learning process, but like with any learning process, start simple, keep your ultimate goal in mind, make a plan, and take small, manageable steps along that plan until you look up and find yourself exactly where you want to be. Then go tackle the next mountain!
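For a taste of what that looks like, here is a minimal FreeRTOS-style sketch (assuming a working FreeRTOS port/BSP for your board; toggle_led() is a board-specific placeholder): you create tasks at priorities you choose, then hand control to the scheduler.

    #include "FreeRTOS.h"
    #include "task.h"

    static void vBlinkTask(void *pvParameters)
    {
        (void)pvParameters;
        for (;;) {
            /* toggle_led();  -- board-specific, assumed to exist */
            vTaskDelay(pdMS_TO_TICKS(500));   /* block for 500 ms */
        }
    }

    int main(void)
    {
        /* Higher number = higher priority in FreeRTOS. */
        xTaskCreate(vBlinkTask, "blink", configMINIMAL_STACK_SIZE,
                    NULL, tskIDLE_PRIORITY + 1, NULL);
        vTaskStartScheduler();   /* never returns if startup succeeds */
        for (;;) {}              /* only reached if the scheduler failed */
    }

Because you pick every task's priority, the "you are the scheduler" point above is quite literal: the highest-priority ready task always runs.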
The real-time Linux landscape is quite confusing. 99.99% of the information out there is just plain obsolete.
First, there are lots of "microkernels" that run Linux as one task. (Such as the defunct RTLinux). The problem is that you must write your real-time task to a different API, and can't depend on anything in Linux, because Linux will be frozen in the background while your task runs. So unless your task is dead-simple ("stop the motors when I press this button"), this approach will cause more pain than gain.
Next, there is the realtime Linux patch set. This hasn't been doing so well, because of the next item:
Lastly, the current Linux kernel has gotten rid of most of the problems that caused people to need realtime in the past. You can even take a CPU core away from the Linux scheduler entirely (e.g. with the isolcpus boot parameter) to have full control of that CPU. See also this paper.
To answer your question: I see two different paths you could take:
1) Start with a normal 3.xx Linux kernel and explore the various APIs and realtime techniques (e.g. realtime priorities, memory pinning, etc.). This can get you "close enough" for 99% of what people want "realtime" for. If it's good enough for high-frequency trading, it's probably good enough for you. (A small sketch of these techniques follows after option 2.)
2) If you have a hard realtime requirement and you are worried that Linux won't cut it, then (as Nick mentioned above), just go buy a processor and write your realtime code with no OS. By splitting up your "realtime" and "non-realtime" code onto different CPUs, you will make the whole system simpler and much more robust.
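Here's that sketch: the two vanilla-Linux techniques named in option 1, real-time priority and memory pinning, in one minimal program (needs privileges; the priority value 50 is an arbitrary choice):

    #include <sched.h>
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 50 };

        /* Pin current and future pages into RAM so page faults
           can't add unpredictable latency. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            perror("mlockall");

        /* Leave CFS for the fixed-priority FIFO class. */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler");

        /* latency-sensitive loop would go here */
        return 0;
    }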
If you want to learn real-time operating systems then I suggest that you get an FPGA, for example the Altera DE2, and experiment with your own operating system and µC/OS. You can read a good text about embedded RTOSes here.
You could also get a Raspberry Pi and write your own operating system for that.

The state of programming and compiling for multicore systems

I'm doing some research on multicore processors; specifically I'm looking at writing code for multicore processors and also compiling code for multicore processors.
I'm curious about the major problems in this field that would currently prevent a widespread adoption of programming techniques and practices to fully leverage the power of multicore architectures.
I am aware of the following efforts (some of these don't seem directly related to multicore architectures, but seem to have more to do with parallel-programming models, multi-threading, and concurrency):
Erlang (I know that Erlang includes constructs to facilitate concurrency, but I am not sure how exactly it is being leveraged for multicore architectures)
OpenMP (seems mostly related to multiprocessing and leveraging the power of clusters)
Unified Parallel C
Cilk
Intel Threading Building Blocks (this seems to be directly related to multicore systems; makes sense, as it comes from Intel. In addition to defining certain programming constructs, it also seems to have features that tell the compiler to optimize the code for multicore architectures)
In general, from what little experience I have with multithreaded programming, I know that programming with concurrency and parallelism in mind is definitely a difficult concept. I am also aware that multithreaded programming and multicore programming are two different things. In multithreaded programming you are ensuring that the CPU does not remain idle (on a single-CPU system; as James pointed out, the OS can schedule different threads to run on different cores -- but I'm more interested in describing the parallel operations from the language itself, or via the compiler). As far as I know, on a single core you cannot truly do parallel operations. In multicore systems, you should be able to perform truly parallel operations.
So it seems to me that currently the problems facing multicore programming are:
Multicore programming is a difficult concept that requires significant skill
There are no native constructs in today's programming languages that provide a good abstraction to program for a multicore environment
Other than Intel's TBB library I haven't found efforts in other programming languages to leverage the power of multicore architectures for compilation (for example, I don't know if the Java or C# compiler optimizes the bytecode for multicore systems or even if the JIT compiler does that)
I'm interested in knowing what other problems there might be, and if there are any solutions in the works to address these problems. Links to research papers (and things of that nature) would be helpful. Thanks!
EDIT
If I had to condense my question down to one sentence, it would be this: What are the problems that face multicore programming today and what research is going on in the field to solve these problems?
UPDATE
It also seems to me that there are three levels where multicore needs to be concerned:
Language level: Constructs/concepts/frameworks that abstract parallelization and concurrency and make it easy for programmers to express the same
Compiler level: If the compiler is aware of what architecture it is compiling for, it can optimize the compiled code for that architecture.
OS level: The OS optimizes the running process and perhaps schedules different threads/processes to run on different cores (a small sketch of this level follows below).
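On Linux, a program can even steer that OS-level placement decision itself by pinning a thread to a particular core. A minimal sketch using the Linux-specific sched_setaffinity (pins the calling thread to core 0):

    #define _GNU_SOURCE      /* for cpu_set_t, CPU_ZERO, CPU_SET */
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                 /* allow core 0 only */

        /* pid 0 means "the calling thread". */
        if (sched_setaffinity(0, sizeof set, &set) != 0)
            perror("sched_setaffinity");
        return 0;
    }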
I've searched on ACM and IEEE and have found a few papers. Most of them talk about how difficult it is to think concurrently and also how current languages don't have a proper way to express concurrency. Some have gone so far as to claim that the current model of concurrency that we have (threads) is not a good way to handle concurrency (even on multiple cores). I'm interested in hearing other views.
I'm curious about the major problems in this field that would currently prevent a widespread adoption of programming techniques and practices to fully leverage the power of multicore architectures.
Inertia. (BTW: that's pretty much the answer to all "what does prevent the widespread adoption" questions, whether that be models of parallel programming, garbage collection, type safety or fuel-efficient automobiles.)
We have known since the 1960s that the threads+locks model is fundamentally broken. By ~1980, we had about a dozen better models. And yet, the vast majority of languages that are in use today (including languages that were newly created from scratch long after 1980) offer only threads+locks.
The major problems with multicore programming are the same as with writing any other concurrent application; but whereas before it was uncommon to have multiple CPUs in a computer, now it is hard to find any modern computer with only one core in it, so taking advantage of multicore, multi-CPU architectures brings new challenges.
But this problem is an old one: whenever computer architectures advance beyond what compilers can handle, the fallback solution seems to be to move back toward functional programming, since that paradigm, if strictly followed, can produce very parallelizable programs - you don't have any global mutable variables, for example.
But not all problems can be solved easily using FP, so the goal then is how to make other programming paradigms easy to use on multicores.
The first thing is that many programmers have avoided writing good multithreaded applications, so there isn't a large pool of well-prepared developers; they have learned habits that will make this kind of coding harder.
But, as with most changes to the CPU, you can look at how to change the compiler, and for that you can look at Scala, Haskell, Erlang and F#.
For libraries, you can look at the Parallel Extensions framework from Microsoft as a way to make it easier to do concurrent programming.
The magazine is at work, but recently either IEEE Spectrum or IEEE Computer had articles on multicore programming issues, so look at what IEEE and ACM articles have been written on these issues to get more ideas as to what is being looked at.
I think the biggest impediment will be the difficulty of getting programmers to change their language, as FP is very different from OOP.
One place for research, besides developing languages that will work well this way, is how to handle multiple threads accessing memory; as with much in this area, Haskell seems to be at the forefront in testing ideas for this (e.g. software transactional memory), so you can look at what is going on with Haskell.
Ultimately there will be new languages, and it may be that we have DSLs to help abstract the developer more, but how to educate programmers on this will be a challenge.
UPDATE:
You may find Chapter 24, "Concurrent and multicore programming", of Real World Haskell of interest: http://book.realworldhaskell.org/read/concurrent-and-multicore-programming.html
One of the answers mentioned the Parallel Extensions for the .NET Framework, and since you mentioned C#, it's definitely something I would investigate. Microsoft has done some interesting things there, though I have to think many of their efforts seem more suited to language enhancements in C# than to a separate and distinct library for concurrent programming. But I think their efforts are worth applauding, and we should respect that we're early here. (Disclaimer: I used to be the marketing director for Visual Studio about 3 years ago.)
The Intel Threading Building Blocks are also quite interesting (Intel recently released a new version, and I'm excited to head down to the Intel Developer Forum next week to learn more about how to use it properly).
Lastly, I work for Corensic, a software quality startup in Seattle. We've got a tool called Jinx that is designed to detect concurrency errors in your code. A 30-day trial edition is available for Windows and Linux, so you might want to check it out. (www.corensic.com)
In a nutshell, Jinx is a very thin hypervisor that, when activated, slips in between the processor and operating system. Jinx then intelligently takes slices of execution and runs simulations of various thread timings to look for bugs. When we find a particular thread timing that will cause a bug to happen, we make that timing "reality" on your machine (e.g., if you're using Visual Studio, the debugger will stop at that point). We then point out the area in your code where the bug was caused. There are no false positives with Jinx. When it detects a bug, it's definitely a bug.
Jinx works on Linux and Windows, and in both native and managed code. It is language and application platform agnostic and can work with all your existing tools.
If you check it out, please send us feedback on what works and doesn't work. We've been running Jinx on some big open source projects and already are seeing situations where Jinx can find bugs 50-100 times faster than simply stress testing code.
The bottleneck of any high-performance application (written in C or C++) designed to make efficient use of more than one processor/core is the memory system (caches and RAM). A single core usually saturates the memory system with its reads and writes, so it is easy to see why adding extra cores and threads causes an application to run slower. If a queue of people can pass through a door one at a time, adding extra queues will not only clog the door but also make the passage of any one individual through the door less efficient.
The key to any multi-core application is optimization of and economizing on memory accesses. This means structuring data and code to work as much as possible inside their own caches, where they don't disturb the other cores with accesses to the common cache (L3) or RAM. Once in a while a core needs to venture there, but the trick is to reduce those situations as much as possible. In particular, data needs to be structured around and adapted to cache lines and their sizes (currently 64 bytes), and code needs to be compact and not call and jump all over the place, which also disrupts pipelines.
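To make the cache-line point concrete, here is a small sketch of the classic false-sharing fix: two threads each hammer their own counter, and padding each counter out to its own 64-byte line keeps the two cores from fighting over a single line (GCC/Clang attribute syntax assumed; build with -lpthread; the ITERS value is just large enough to make the effect measurable):

    #include <pthread.h>
    #include <stdio.h>

    #define CACHE_LINE 64
    #define ITERS 100000000L

    /* Each counter occupies a full cache line. Remove the pad array and
       the two counters share one line, and the same code runs far slower
       due to cross-core cache-line ping-pong. */
    struct padded { long value; char pad[CACHE_LINE - sizeof(long)]; };
    static struct padded counters[2] __attribute__((aligned(CACHE_LINE)));

    static void *bump(void *arg)
    {
        long *c = &((struct padded *)arg)->value;
        for (long i = 0; i < ITERS; i++)
            (*c)++;                    /* each thread touches only its own line */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[2];
        for (int i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, bump, &counters[i]);
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);
        printf("%ld %ld\n", counters[0].value, counters[1].value);
        return 0;
    }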
My experience is that efficient solutions are unique to the application in question. The generic guidelines (above) are a basis on which to construct code, but the specific tweaks that come out of profiling will not be obvious to those who were not themselves involved in the optimization work.
Look up fork/join frameworks and work-stealing runtimes. These are two names for the same, or at least related, approach: recursively subdivide large tasks into lightweight units, such that all available parallelism is exploited, without having to know in advance how much parallelism there is. The idea is that it should run at serial speed on a uniprocessor, but get a linear speedup with multiple cores.
Sort of a horizontal analogue of cache-oblivious algorithms if you look at it right.
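Since OpenMP came up earlier in the thread, here is a sketch of the fork/join idea using OpenMP tasks (compile with -fopenmp; the 100000-element cutoff is an arbitrary tuning choice): a sum is split recursively until the chunks are small, and the runtime's work stealing balances the pieces across however many cores exist.

    #include <omp.h>
    #include <stdio.h>

    static long sum(const long *a, long n)
    {
        if (n < 100000) {                     /* sequential cutoff */
            long s = 0;
            for (long i = 0; i < n; i++) s += a[i];
            return s;
        }
        long left, right;
        #pragma omp task shared(left)         /* fork: left half becomes a task */
        left = sum(a, n / 2);
        right = sum(a + n / 2, n - n / 2);    /* current thread keeps the rest */
        #pragma omp taskwait                  /* join */
        return left + right;
    }

    int main(void)
    {
        enum { N = 1 << 22 };
        static long a[N];
        for (long i = 0; i < N; i++) a[i] = 1;

        long total = 0;
        #pragma omp parallel
        #pragma omp single                    /* one thread seeds the task tree */
        total = sum(a, N);

        printf("%ld\n", total);               /* prints N */
        return 0;
    }

On one core this degenerates to roughly the serial loop; with more cores, idle threads steal the forked halves, which is exactly the "don't know the parallelism in advance" property described above.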
But I'd say the main problem facing multicore programming is that the great majority of computations remain stubbornly serial. There's just no way to throw multiple cores at those computations and make them stick.

Providing suggested solutions in error messages

Developer tools and software typically do not provide solution suggestions in error messages. This makes sense for compilers because they are supposed to tell precisely what went wrong.
There are "lint" tools to provide suggestions, but AFAIK, few developers use lint tools regularly or even at all.
There is a large set of developer-oriented software that would do well to have a "suggested solution(s)" part to error messages. This is one of the great features that IDEs like Eclipse have. But software like web application frameworks, standard/popular libraries, etc. do not have this helpful feature.
Is this something that is just lacking in user-friendly design (one can consider this unnecessary, given that Google is so good) or is there a good reason for it? Do any compilers, frameworks, platforms that you use provide error messages with solution suggestions, if none, why not?
What do you want to see?
Error: Null Pointer Exception (suggested solution: Set the object to something).
I mean, it's not the error writer's job to educate you. I prefer simple error messages that point to the exact problem, so I myself can determine just what is causing it this time. For me, this certainly lies in the domain of 3rd-party tools; perhaps the compilers could provide extensive context to them, to do their analysis, but it's not something I would really find valuable.
The primary thing I want from a compiler or runtime error is context - where did it happen and where was it called from when it failed.
I think most modern compilers and runtimes (Java, Ruby, Go) do a decent job there; with line numbers and stack traces you can find most bugs. Even the Javascript options are getting good; it certainly beats the good old "alert()" approach to debugging.
Isn't it fair enough to leave suggested solutions to the IDEs?
But I do agree that I have seen frameworks/libraries that were very sparse with error messages, and "NullPointerException at line 264" deep inside some third-party library where you do not have the source code tells you very close to nothing.
If this is an issue, I think it is primarily restricted to third-party libraries. The "good reason" is presumably that it was developed in a hurry in somebody's spare time, and they did not put meaningful error messages very high on the priority list.
It's hard to present the solution to an error. There are so many possibilities, and as #silky pointed out, some just cannot be diagnosed.
Warnings are a different beast. In many situations modern compilers use these to say "I think you meant X when you said Y; you might want to check that."
A programming language has the opportunity to be the ultimate in flexibility in terms of user interface. You can make the computer do anything you want. The flip side of the coin is that if you type so much as one character wrong, it might have no idea along which axis your mistake was made, or where.
Systems with less flexibility offer more opportunities for offering solutions to problems. If you type (a b c) to your Lisp compiler and it doesn't know what a is, it's so close to so many valid lines of code that it can't exactly suggest a single fix. If you misspell "IDENTIFICATION DIVISION" at the start of your COBOL program, it's relatively easy for the compiler to spot the error and help you out. Most other languages lie between these extremes.
Programmers tend to move, over their careers, from less powerful and more structured languages, into more powerful and flexible languages. (At least, that's what I saw happen before Javascript became such a hot newbie language.) This means their discipline improves to the point where they are able to use tools that offer power at the expense of being told what to do. The environments I've used that can tell me what to fix, tend to be those that I dislike using.
It's no different from any other art. Look at musicians or painters or martial artists or actors or writers or chefs or even people learning to speak Spanish: when they're young and inexperienced, they're put in a system where there's a lot of structure, and if they make a mistake somebody can easily correct them. As they become more skilled, they need and want less and less support. When they've become experts themselves, they need no support at all, but the flip side of the coin is that you can't as easily point out what's right or wrong. If your kid colors outside the lines, you can explain the issue, but if Picasso or Pollock makes a bad brushstroke, what would you say? Or if Philip Glass puts a note out of place, or Bruce Lee turns his body too far into a punch? And who would want to work in an art form that's so limited that profane things aren't possible? COBOL compilers still exist if anybody really wants them, but far more people pay money for awful paintings than masterful color-by-number prints.
More directly, there's a site, ErrorHelp (nee bug.gd), that lets you type in an error message and get a result, and it's older than SO but nobody uses it. I've tried. Unless you're in a context where there's only one possible answer, a simple problem-encountered to suggested-solution dictionary does not work, and therefore it's an utter failure in any creative field.
Most IDEs have their own compilers. This allows them to do partial compilation, code refactorings, and many other tricks. I find the error messages and suggestions quite useful. Just because the compiler isn't invoked on the command line doesn't mean it's not a compiler.

If I were to build a new operating system, what kind of features would it have? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
I am toying with the idea of creating a completely new operating system and would like to hear what everyone on this forum's take on that is. First, is it too late? Are the big boys so entrenched in our lives that we will never be able to switch (wow - what a terrible thought...)? But if this is not the case, what should an operating system do for you? What features are the most important? Should all the components be separate installations (in other words, should the base OS really have no user functionality, with that added on by creating "plug-ins", kind of like a good flexible tool)?
Why do I want to do this... I am more curious about whether there is a demand, and I am wondering, since the OSes we use most today (Linux, Windows, Mac OS X (FreeBSD)) were actually written more than 20 years ago (and I am being generous - I mean dual and quad cores did not exist back then, buses were much slower, hardware was much more expensive, etc.), whether with the new technology we would do anything differently.
I am anxious to read your comments.
To answer the first question: It's never too late. Especially when it comes to niche market segments and stuff like that.
Second though, before you start down the path of creating a new OS, you should understand the kind of undertaking it is: it'd be a massive project.
Is it just a normal programmer "scratch the itch" kind of project? If so, then by all means go ahead -- you might learn a lot of things by doing it. But if you're doing it for the resulting product, then you shouldn't start down that path until you've looked at all the current OSes under development (there are a lot more than you'd think at first) and figured out what you'd like to change in them.
Quite possibly the effort would be better spent improving/changing an existing open source system. Even for your own experimentation, it may be easier to get the results you want if you start out with something already in development.
First, a little story. In 1992, during the very first Win32 conference (what would become the MS Professional Developers Conference), I had the opportunity to sit down over some lunch with one Mr. Dave Cutler (Chief Architect of what most folks would now know as Windows NT, Windows 2000, XP, etc.).
I was at the time working in the Multimedia group at IBM Boca Raton on what some of you might remember: OS/2. Having worked on OS/2 for several years, and recognizing "the writing on the wall" of where OSes were going, I asked him, "Dave, is Windows NT going to take us into the next century, or are there other ideas on your mind?". His answer to me was as follows:
"M...., Windows NT is the last operating system anyone will ever develop from scratch!". Then he looked over at me, took a sip of his beer, and said, "Then again, you could wake up next Saturday after a particularly good night out with your girl, and have a whole new approach for an operating system that'll put this to shame."
Putting that conversation into context, and given the fact that I'm back in college pursuing my Master's degree (specializing in operating systems design), I'd say there's TONS of room for new operating systems. The thing is to put things into perspective. What are your target goals for this operating system? What problem space is it attempting to service?
Putting this all into perspective will give you an indication of whether you're really setting your sights on an achievable goal.
That all being said, I second an earlier commenter's note about looking into things like "Singularity" (the focus of a talk I gave this past spring in one of my classes...), or, if you really want to "sink your teeth into" an OS in its infancy, look at "ReactOS".
Then again, WebOSes such as gOS are probably where we're headed over the next decade or so. Or then again, someone particularly bright could wake up after a particularly fruitful evening with their lady or guy friend, and have the "next big idea" in operating systems.
Why build the OS directly on a physical machine? You'll just be mucking around in assembly language ;). Sure, that's fun, but why not tackle an OS for a VM?
Say an OS that runs on the Java/.NET/Parrot (you name it) VM, that can easily be passed around over the net and can run a bunch of software.
What would it include?
Some way to store data (traditional FS won't cut it)
A model for processes / threads (or just hijack the stuff provided by the VM?)
Tools for interacting with these processes etc.
So, build a simple Platform that can be executed on a widely used virtual machine. Put in some cool functionality for a specific niche (cloud computing?). Go!
For more information on the micro- versus monolithic kernel, look up Linus' 'discussion' with Andrew Tanenbaum.
I would highly suggest looking at an early version of Linux (0.01) to at least get your feet wet. You're going to be mucking about with assembly and very obscure low-level stuff to even get started (especially getting into protected mode, multitasking, etc.). And yes, it's probably true that the "big boys" already have the market cornered. I'm not telling you NOT to do it, but maybe doing some work on the Linux kernel would be a better stepping stone.
Check out Cosmos and Singularity, these represent what I want from a futuristic operating system ;-)
Edit :
SharpOS is another managed OS effort. Suggested by yshuditelu
An OS should have no user functionality at all. User functionality should be added by separate projects, which does not at all mean that the projects should not work together!
If you are interested in user functionality maybe you should look into participating in existing Desktop Environment projects such as GNOME, KDE or something.
If you are interested in kernel-level functionality, either try hacking on a BSD derivative or on Linux, or try creating your own system -- but don't think too much about the user functionality then. Getting the core of an operating system right is hard and will take a long time -- wanting to reinvent everything does not make much sense and will get you nowhere.
You might want to join an existing OS implementation project first, or at least look at what other people have implemented.
For example AROS has been some 10 or more years in the making as a hobby OS, and is now quite usable in many ways.
Or how about something more niche? Check out SymbOS, which is a fully multitasking desktop operating system (in the style of Windows) - for 4MHz Z80 CPUs (Amstrad CPC, MSX). Maybe you would want to write something like this, which is far less of a bite than a full next-generation operating system.
Bottom line...focus on your goals and even more importantly the goals of others...help to meet those needs. Never start with just technology.
I'd recommend against creating your own Operating System. (My own geeky interruption...Look into Cloud Computing and Amazon EC2)
I totally agree that it would first help by defining what your goals are. I am a big fan of User Experiences and thinking of not only your own goals but the goals of your audience/users/others. Once you have those goals, then move to the next step of how to meet it.
Nowadays, what is an operating system anyway? Kernel, operating system, virtual server instance, Linux, Windows Server, Windows Home, Ubuntu, AIX, zSeries OS/390, et al. I guess this is a good definition of OS... Wikipedia
I also like Sun's slogan "the Network is the computer"... but their company has really fallen in the past decade.
On that note of the Network is the computer... again, I highly recommend, checking out Amazon EC2 and more generally cloud computing.
I think that building a new OS from scratch to resemble the current OSes on the market is a waste of time. Instead, you should think about what Operating System will be like 10-20 years from now. My intuition is that they will be so different as to render them mostly unrecognizable by today's standards. Think of frameworks such as Facebook (gasp!) for models of how future OSes will operate.
I think you're right about our current operating systems being old. Someone said that all operating systems suck. And yes, don't we have problems with them? Call it BSOD, Sad Mac or a Kernel Panic. Our filesystems fail, there are security and reliability problems.
Microsoft pursued an interesting approach with its Singularity kernel. It isolates processes in software, using a virtual machine similar to .NET's, and formal verification methods. Basically, all IPC seems to be formally specified and verified, even before a program is run.
But there's another problem with it - Singularity is only a kernel. You can't run applications not designed for it. This is a huge penalty, making an eventual transition (Singularity is not public) quite hard. If you manage to produce something of similar technical advantage, but with a real transition plan (think about the IPv4 -> IPv6 problems, or how Windows got so much market share on the desktop), that could be huge!
But starting small is not a bad choice either. Linux started just like this, and there are many cases when it leads to better design. Small is beautiful. Easier to change. Easier to grow. Anyway, good luck!
Check out the Singularity project - do something revolutionary.
I've always wanted an operating system that was basically nothing but a fresh slate. It would have built in plugin support which allow you to build the user interface, applications, whatever you want.
This system would work much like a Lua sandbox for a game would, minus the limitations. You could build a plugin or module system that would have access to a variety of subsystems. For example, if you were to write a web browser application, you would need to load the networking library and use that within your plugin script. Need "security"? Load the library.
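On existing systems, the closest building block for this kind of runtime plugin loading is dynamic linking. A minimal C sketch using dlopen (the networking.so module and plugin_init entry point are made-up names for illustration; link with -ldl on older glibc):

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Load the plugin module at runtime. */
        void *lib = dlopen("./networking.so", RTLD_NOW);
        if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        /* Look up the agreed-upon entry point and hand control to it. */
        void (*init)(void) = (void (*)(void))dlsym(lib, "plugin_init");
        if (init)
            init();

        dlclose(lib);
        return 0;
    }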
The difference between this and Linux is that Linux is an operating system with a window manager that runs on top of it. In this theoretical operating system, you would be able to implement the generic "look" and "feel" of a variety of windows within the plugin system, or you could create a custom interface.
The difference between this and Windows is that it's fully customizable, and by fully I mean: if you wanted not to implement any cryptography at all, you can do that, or if you wanted to customize an already existing window, you can do that. Nothing is closed to you.
In this theoretical operating system, there is an OS with a plugin system. The plugin system uses a simple and powerful language.
If you're asking what I'd like to see in an operating system, I can give you a list. I am just getting into programming so I'm not sure if any of this is possible, but I can give you my ideas.
I'd like to see a developed operating system (besides the main ones) in which it ISN'T a pain to get the wireless card to work. That is my #1 pet peeve with most of the ones I've tried out.
It would be cool to see an operating system designed by a programmer for other programmers. Have it so you can run programs for all different operating systems. I don't know if that's possible without having a copy of Windows and OS X, but it would be really damn cool if I could check the compatibility of programs I write with all operating systems.
You could also consider going with MINIX which is a good starting point.
To the originator of this forum, my hat's off to you, sir, for daring to think in much bolder and more idealistic terms regarding the IT industry. First and foremost, your questions are precisely the kind you would think should engage a much broader audience, given the flourishing computer sciences all over the globe and the openness taught to us by the revolutionary Linux OS, which has only begun to win the hearts and minds of so many out there by way of strengthening its user-friendly interface. So kudos on pushing the envelope.
If I'm following correctly, you are supposing that, given the fruits of our labor thus far, the development of further hardware and software concoctions could, or at least should, be less conventional. The implication, of course, is that any new development would reach its goal faster than what is typical. The prospect, however, of an entirely new OS this time around would be challenging - to say the least - only because there is so much friction out there already between Linux and Windows. It is really a battle between the open source and proprietary ideologies. Bart Roozendaal in a comment above proves my point nicely. Forget the idea of innovation and whatever possibilities may come from a much more contemporary operating system, for such things are secondary. What he is asking essentially is: are you going to be on the side of profit or not? He gives his position away easily here. As you know, Windows is notorious for its monopolistic approach regarding new markets, software, and other technology. It has maintained a death grip on its hegemony since its existence, and sadly the Windows OS is racked with endless bugs and backdoors.
Again, I applaud you for taking a road less travelled, and hopefully forging ahead and not becoming discouraged. Personally, I'd like to see another OS out there... one much more contemporary.