I recently posted a question about Azure... is it really an OS? I understand the technical details, but I got a lot of fuzzy answers... I really want to know: what do you think is the difference between an OS and a framework?
Just for reference, Azure will be built on top of Hyper-V servers, and the virtual machines will be running vanilla Windows Server 2008. On top of those many virtual machines it will run services that create the cloud, and that cloud is what's called Azure. Microsoft is calling Azure an OS.
I am trying to understand how to define the difference between an OS and a framework.
Operating System: The infrastructure software component of a computer system
Framework: A re-usable design for a software system (or subsystem).
By these definitions it seems to me, that an operating system can be built using a framework, and a framework can be built to interact with an operating system.
Singularity is an example of an experimental OS that is built using managed code.
"Framework" is a very broad term; it can be used to describe many types of subsystems. It could even describe an operating system.
"Operating system" is more specific: it implies facilitating interaction with the hardware layer of a computer or group of computers, through the use of human user interfaces. I think Azure fits this description.
It's up to marketing - I don't think the terms have a definite meaning any more.
Is a JVM a framework?
What if it's running on a raw uC or even an FPGA - is it an OS?
An OS is the thing that directly interfaces with the machine, be it virtual or real. It has to expose syscalls that handle input devices, output devices, sound, networking, and all the other things that we take for granted these days. It also often provides some kind of UI which uses these services to make it easy to use/useful for an end-user. It needs to have device drivers to work with video cards, sound cards, etc. (Once again, these can be virtualized).
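To make that syscall point concrete, here is a minimal C sketch (assuming Linux with glibc; everything about it is purely illustrative) that asks the kernel directly to write to the terminal - the kind of low-level surface an OS exposes and a framework normally wraps:

#define _GNU_SOURCE
/* write_syscall.c - sketch of talking to the OS directly via a system call.
   Assumes Linux with glibc; a framework API would normally sit well above this. */
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const char msg[] = "hello straight from a syscall\n";
    /* Ask the kernel to write to file descriptor 1 (stdout), no library buffering. */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}

Compiled with any C compiler on a Linux box, this prints the message without going through stdio at all.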
A framework is... something built on top of the OS. It, too, exposes an API, but usually not one as low-level as the one the OS exposes.
Frameworks provide API contracts that OSes usually don't, meaning they sit atop the OS, hide and manage the differences between platforms, and consequently give you that platform-independence goodness that can broaden your target audience.
A framework can be thought of as a development environment, a helping platform for further development: you can work additively, building other applications out of the framework's components. An OS, on the other hand, is system software; it is an environment for operating the system itself.
Related
I have been programming in Java for 3 years but I have no experience with other languages. I want to know what I need to study in order to be able to make an operating system. I am most likely going to base my operating system on the Linux kernel. What programming languages should I be familiar with, and what aspects of PC hardware should I study? If you know any online tutorials or good books, please mention them.
The answer depends on how far you want to go and how much you want to write yourself vs using existing code.
The most straightforward way is to have a look at Arch Linux or Gentoo and build your own, custom Linux setup. Approximate time needed to create a minimal working system: ~2 hours
Otherwise, you could compile the Linux kernel, build some software packages, and put it all together yourself to create your own Linux distro - i.e. your own operating system, in a sense. Linux From Scratch will be an invaluable resource if this is what you decide to do. Approximate time needed to create a minimal working system: ~2-5 days
Say that's too easy for you but you're not ready to delve into the nitty-gritty of kernel development. You can write your own applications that run on top of the Linux kernel. Typically you'd need to know C/C++, but any language that supports running on Linux/compiling to a Linux executable will work. Heck, you could chuck a Java runtime on top (or write your own) and write your whole 'operating system'/user-space in Java. Approximate time needed to create a minimal working system: ~6-12 months
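To give a feel for what a home-grown user-space on top of the Linux kernel means, here is a hypothetical, heavily simplified PID 1 in C; the /bin/sh path, the missing error handling, and the file name are placeholders for illustration only:

/* tiny_init.c - sketch of a minimal PID 1 for a home-grown user-space.
   Assumes the kernel is booted with init=/tiny_init and a shell exists at /bin/sh. */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t shell = fork();
    if (shell == 0) {
        execl("/bin/sh", "sh", (char *)0);   /* child: become the shell */
        _exit(127);                          /* exec failed */
    }
    /* PID 1's other job: reap orphaned children forever. */
    for (;;) {
        if (wait(NULL) < 0)
            sleep(1);                        /* nothing to reap right now */
    }
}

Booted with init pointed at something like this, the kernel plus this one program plus a shell already constitutes a very spartan 'operating system' in the sense used above.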
What about if you want to get down to programming your own kernel from scratch? Heck, Linux is overrated, you want to write your own kernel and it's going to be the next best thing! You would want to learn a bit about the platform you're developing on. At the very least, you'll need to know the assembly instructions for some special operations on your platform that can't be done natively in C, like switching CPU modes. You'll definitely need to read up on the OS dev wiki for this, and you should have a fairly decent computer science background.
At this point, you shouldn't need too much other than a good tutorial, C, a little assembly, reference manuals for the hardware you're hoping to support, and 3+ years of computer science background to get you started. The boot loader that boots your kernel should handle most of the hardware initialisation. Bootloaders like GRUB (I'm assuming you're developing on an x86 system) do so much for you that they can probably just jump to your kernel main function straight away without you having to do much at all. Again, if you wanted, you could port or write your own Java runtime in C and write the rest of your kernel in Java! Approximate time needed to create a minimal working system: ~3-5 years
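As a rough illustration of that "GRUB jumps straight to your kernel main" stage, the entry point of a freestanding kernel can be as small as poking characters into the VGA text buffer. This C sketch assumes an x86 machine left in 32-bit protected mode with text-mode video by a multiboot loader, and it omits the multiboot header, boot stub and linker script you would also need:

/* kmain.c - sketch of a bare-metal kernel entry point on x86.
   Assumes a multiboot-compliant loader (e.g. GRUB) has left the CPU in
   32-bit protected mode with the VGA text buffer at physical address 0xB8000. */
void kmain(void)
{
    volatile unsigned short *vga = (unsigned short *)0xB8000;
    const char *msg = "Hello from my kernel";

    for (int i = 0; msg[i] != '\0'; i++)
        vga[i] = (unsigned short)(0x0700 | msg[i]);  /* grey-on-black character cell */

    for (;;)
        ;  /* nothing to return to: spin forever */
}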
But, let's just say you're screaming for more pain and you want to write an operating system really from scratch and you don't want no bootloader doing a lot of the legwork. What do you need? Firstly, you'll need a lot more reference manuals. And you'll need to read up on a lot more assembly. Especially for Intel processors, there's a lot of work involved bringing the system up from 16bit mode to 32/64bit protected mode with paging (which is what I assume you want). You'll also want to know about every tiny quirk and weirdness of your platform that will affect your OS (these are often not documented; hooray!). Plus, all of the above. In short, you will need to study everything. Approximate time needed to create a minimal working system: 5+ years
Of course, this post is just scraping the surface of what is needed to even bring up a basic operating system capable of, say, opening a web browser. The estimates roughly assume that a minimal working system is something capable of running a graphical web browser, and they will vary depending on how much you want to write yourself.
I don't mean to be condescending, but this is the kind of reality you'll be facing if you decide to write your own operating system. Nevertheless, it is a valuable learning experience if you can break the initial barrier, or even just from trying to set up your own Linux system.
First, I would say that you should install any Linux OS on your system and get accustomed to it.
Second, for OS development you have to know the C language. As for assembly language, it depends on where you start OS development. If you use an available bootloader, then I don't think you will be required to learn assembly language.
This is a website on OS development: http://wiki.osdev.org/Main_Page
There you will find all the stuff you need to know for OS development, and also how to develop an OS step by step.
Nowadays, the "Eudyptula Challenge" is going on. It is a series of programming exercises for the Linux kernel. You can find more info here: http://eudyptula-challenge.org/
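Exercises of that sort typically begin with a "hello world" loadable kernel module, something along these lines (built out-of-tree against your kernel headers with the usual obj-m Makefile; the names are just illustrative):

/* hello.c - a minimal hello-world Linux kernel module. */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal hello-world module");

static int __init hello_init(void)
{
    pr_info("hello: module loaded\n");
    return 0;               /* 0 means success; the module stays loaded */
}

static void __exit hello_exit(void)
{
    pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

Load it with insmod, check dmesg for the message, and remove it with rmmod - that round trip alone teaches you a surprising amount about the kernel build system.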
Recently I heard the term meta-operating system while I was learning ROS. Could you please help me differentiate between an operating system and a meta-operating system?
What ROS is and is not is best explained in this paper.
To cite the introduction at ros.org:
It provides the services you would expect from an operating system, including hardware abstraction, low-level device control, implementation of commonly-used functionality, message-passing between processes, and package management. It also provides tools and libraries for obtaining, building, writing, and running code across multiple computers.
You'll also find a good explanation in this wiki. The basic difference is that a meta operating system is built on top of the operating system and allows different processes (nodes) to communicate with each other at runtime.
I wanted to learn the .NET Micro Framework and found that there is (among others) Netduino, which is somehow compatible with Arduino. Recently .NET Gadgeteer was released to the public. There was a lot of enthusiasm, so it looks like an important step for the .NET Micro Framework.
Is it possible to compare them somehow? I'm not sure for what tasks Netduino is better and for what tasks Gadgeteer is. Or are they in fact completely different beasts?
I'm unable to tell this from the information available on their home pages, because those are mostly marketing materials.
Netduino (and other HW boards, including GHI's FEZ products) are HW devices with a microprocessor running the .NET Micro Framework - but in a form factor that resembles Arduino, meaning you can plug other boards (or shields) on top of the mainboard to extend its functionality.
.NET Gadgeteer is something different:
.NET Gadgeteer Hardware
A .NET Gadgeteer system is composed of a mainboard containing an embedded processor and a variety of modules which connect to the mainboard through a simple plug-and-play interface. There are lots of .NET Gadgeteer modules available today: display, camera, networking, storage, input controls, and more modules are being designed all the time!
The .NET Gadgeteer mainboard's sockets are numbered, and each is labeled with one or more letters which indicate which modules can be plugged into it
The CPU is more powerful than that of Netduino-class devices.
Gadgeteer runtime
Gadgeteer is 100% C# managed code so it is not tied to any firmware (C++ code). http://gadgeteer.codeplex.com/
It is an “Open socket-connections standard".
You can get a module from company X, another module from company Y, and use both on a mainboard from company Z, even if you don't have the design files. All will work together nicely. Of course someone may come up with an advanced module that requires special software, but mostly modules will simply work.
You can even make your own modules to go on any mainboard... this is the beauty of Gadgeteer.
Think of this as "Arduino shield like" but better, since there is no pin overlapping and you are not limited to a couple of shields before the board gets too long to be usable. You could even take the Gadgeteer socket standard and use it on a board that is not running NETMF at all, but you would lose all the nice software Gadgeteer provides.
For details about the runtime check out the documents at Codeplex, http://gadgeteer.codeplex.com/releases/view/72208
For more information check out:
http://research.microsoft.com/en-us/projects/gadgeteer/default.aspx
http://channel9.msdn.com/coding4fun/blog/Along-came-a-spider-a-NET-Gadgeteer-FEZ-Spider
http://www.ghielectronics.com/catalog/product/297/
http://channel9.msdn.com/Blogs/Clint/NET-Gadgeteer
Netduino Go was recently released...supporting both Arduino Shield and Gadgeteer module pin compatibility. It also supports plug-and-play go!bus modules.
A few clarifications on Gadgeteer and Netduino:
Gadgeteer, from a hardware perspective, is a pin-assignment technology like Arduino Shields. There's a similar level of simplicity/complexity to it as with Arduino shields (i.e. overlapping pins, peripherals that go away on one socket when you plug in modules on another socket, fixed number of peripheral features, etc.) In contrast to Arduino, only a subset of Gadgeteer modules will work with a given Gadgeteer mainboard. But with Gadgeteer, you get multiple pin interfaces so there's less pin overlapping.
Netduino Go uses plug-and-play style modules. The go!bus protocol that Netduino Go uses is virtual I/O...so when you plug in a go!bus module it auto-enumerates and adds its features to the mainboard. Similar to how USB works on your computer. No overlapping pins or module limitations.
Netduino Go also has a compatibility mode where you can plug in Gadgeteer modules to up to two sockets. Like with other Gadgeteer-compatible boards, plugging in a module disables functionality on one or more other sockets.
Netduino Go has six times the code storage (1MB, 384KB for code), four times the speed (168MHz), and twice the RAM (100KB+) of Netduino Plus.
More info on Netduino Go:
http://forums.netduino.com/index.php?/topic/3867-introducing-netduino-go/
More info on Gadgeteer:
http://gadgeteer.codeplex.com/
Chris
Secret Labs LLC
Netduino is built with the open source hardware movement in mind and is compatible with existing Arduino shields while allowing you to use the .NET Micro framework to program it. This allows you to leverage existing experience with .NET on that platform instead of having to go through another language.
.NET Gadgeteer is a completely different take on the hardware with a specific set of hardware created for it that is modular and standardized.
Think of Netduino as an Erector set and .NET Gadgeteer as Lego. You can build stuff with both of them, but one is a bit more powerful if you want to apply what you have created to a broader set of problems.
Initial startup costs to get involved are also cheaper with Netduino.
See: http://www.i-programmer.info/news/91-hardware/2819-net-gadgeteer-an-alternative-to-arduino.html
Really the only downside to the Netduino Go is the current lack of networking, as of the end of May 2012.
Chris has already said (elsewhere) that this is only weeks away, and when it ships I suspect the Go will be the go, as it were. It is to Gadgeteer as C# is to Java - more or less the same but done better with the benefit of hindsight. Looking around the forums I see other platforms with either uneven hardware compatibility or mediocre driver quality.
There's also a possibility of onboard RTC. Not a certainty, but you never know your luck in the big city.
Something Chris (and the Gadgeteer guys) don't take enough credit for is the computer-as-a-network approach Gadgeteer and Go both take. The network stack on a single-CPU system like the Netduino Plus can never perform like one that has a dedicated CPU with its own buffer, and pushing the network stack onto its own board gets it out of your application code space. I suspect that the Go, running on a Cortex M3 with a supporting cast of Cortex M0s smoothly integrated by the crunchy goodness of baked-in virtualisation, is going to feel like developing on a much larger machine.
Some things none of the prototyping boards do well are
Hardware watchdog reboot for hung application code
OTAU (over the air update)
You need both of those for release grade hardware, which I guess means rolling your own. Netduino Go and Gadgeteer expressly support the notion of roll your own modules.
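For comparison, on a Linux-class board the first of those two items is usually covered by the kernel's /dev/watchdog interface rather than rolled by hand. This C sketch only illustrates that idea (the 10-second kick interval is made up), and it is not something the NETMF boards above expose:

/* watchdog_kick.c - sketch of using the Linux watchdog device.
   Once /dev/watchdog is opened, the hardware (or softdog) timer reboots the
   machine unless the process keeps writing to it. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/watchdog", O_WRONLY);
    if (fd < 0)
        return 1;               /* no watchdog driver available */

    for (;;) {
        write(fd, "\0", 1);     /* "kick" the watchdog */
        sleep(10);              /* must be shorter than the configured timeout */
        /* if the application hangs and stops kicking, the hardware reboots the board */
    }
}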
I saw a question on the Linux kernel. While reading it, I had this doubt.
The Windows NT branch of Windows has a hybrid kernel. It's neither a monolithic kernel, where all services run in kernel mode, nor a microkernel, where everything runs in user space. This provides a balance between the protection gained from a microkernel and the performance seen in a monolithic kernel (as there are fewer user/kernel mode context switches).
As an example, device drivers and the Hardware Abstraction Layer run in kernel mode, but the Workstation service runs in user mode. The Wikipedia article on hybrid kernels has a good overview.
The Windows Internals book gives an explanation for the hybrid approach
... The Carnegie Mellon University Mach operating system, a contemporary example of a microkernel architecture, implements a minimal kernel that comprises thread scheduling, message passing, virtual memory, and device drivers. Everything else, including various APIs, file systems, and networking, runs in user mode. However, commercial implementations of the Mach microkernel operating system typically run at least all file system, networking, and memory management code in kernel mode. The reason is simple: the pure microkernel design is commercially impractical because it's too inefficient.
According to Wikipedia it's a hybrid kernel, which may or may not be just marketing speak for something about the same as a monolithic one. The graphic on the latter page does make some things clearer, though.
Most importantly, almost no program on Windows uses the kernel API directly, and the complete Windows API subsystem resides in user space, which is a rather large part of the OS as we see it. In more recent versions, Microsoft has begun to pull more and more device drivers from kernel space into user space (which is an especially good idea for certain drivers, such as those for video cards, which are probably as complex as an operating system in their own right).
"Hybrid kernel" is the name of the kernel used by Windows systems after Windows 98; before that, Windows was a GUI overlaid on DOS, using a monolithic kernel.
Are there any technical benefits to Windows/Microsoft as a platform to use while developing, over a Unix dialect such as Linux or Solaris?
I know that companies sometimes choose Microsoft because there simply aren't enough programmers available who know Unix, or because those programmers are much more expensive to hire.
So assuming all developers knew Unix and Microsoft equally well, would there still be cases where you are better off developing in Windows?
To me there are only two arguments for using Windows as a dev platform:
You have to because you're doing .Net/Windows development (or because the company simply gives you no choice); or
The apps, specifically Microsoft Office/Exchange. I'm sorry but OpenOffice is dreadful in comparison to Word/Excel.
Apart from that imho Linux has every other advantage including:
MUCH faster filesystem (particularly important when dealing with lots of small files). Last year I went from a build time of 8-10 minutes to 2-3 just by this switch (ant build of same code base);
Typically your dev environment then matches your production environment (if your production environment is Windows, your dev environment will be Windows almost guaranteed), which can be useful. We've had issues with Java classpath visibility because of differences between JBoss on Windows and Linux; and
A much better set of command line tools (yes, I know you can use Cygwin, etc., but it's not as good).
That's one reason why I find the idea of a Mac as my next dev workstation so appealing: you can look at it as either Unix with applications (i.e. Office) or Windows with a decent filesystem (it will be even better if/when OS X adopts ZFS); either way it's a win. The only thing that's really put me off is that Apple does stupid things like delaying the Java 6 release by a year just so they can put the Leopard look and feel in.
Just off the top of my head:
.NET (even though mono is really great)
Visual Studio - probably the best IDE around
Excellent documentation (the MSDN Library is way more developer-friendly than man pages, in my opinion)
Huge userbase (that's more like a business thing but still it is a very important factor)
Binary compatibility (it's much easier to support 4-5 kernel and standard C library versions than the practically infinite number of combinations you can find in Linux distros)
One of the best things you can do is keep your options open. Choose a platform-independent technology and you'll be able to build software for any OS or implementation. From a technical standpoint this makes a lot of sense, as well as from a business one.
As for specific technical advantages of the Windows platform, other than the large developer community, the wealth of development information, and widely supported IDEs like Visual Studio, I'd say you'll be hard pressed to find one. Even there, Eclipse can do just as good a job with a platform-independent technology.
Microsoft systems tend to have much better integration between different parts - there's a lot less heterogeneity to worry about if you're using binary-only software (x86 and comctl3d is a lot easier to support than everything *nix runs on).
The learning curve on Windows is shallow to begin with but has a longer overall distance. On Unix/Linux the beginning is a struggle but getting stuff done becomes easier later on, when the inner workings of the OS begin to make sense.
At least that's been my experience with them. Windows for quick payoff, Linux if you're going to be doing something much longer-term. And virtual machines if you can't decide :)
I think this question presents a false dichotomy. There's no reason you have to choose windows over unix or vice versa. Virtualization is free and easy. It's the best of both worlds!
One reason we have Windows development platforms (even though our production is on Linux or Solaris) is to have a common environment for all.
That means all the different groups involved in the realization of a piece of software:
are not all developers (business and functional people also need a working environment)
are all on the same platform (Windows)
all use the same tools to write and communicate (as in Word, PowerPoint)
can have the same environment on their laptops
In short: uniformity of environment for all (developers and non-developers alike).
The other reason is depreciation: it is easy to manage depreciation for PCs, where the support services are lighter than for a full-scale Unix server (like a Sun Fire, an F15K or F50K, ...): the latter needs expensive assistance service contracts (like "bronze", "silver" or "gold", depending on the level needed). A PC is easier to fix/replace, and it is not as critical if a developer "messes up" on it and crashes it utterly ;)
That being said, the downside of this is that you do not change PCs every day: it means managing a large fleet of desktops, and you cannot just decide to upgrade like that (and that goes for the OS too).
So where the other answers are all about "virtual machines", when your set of PCs is from 2003, with only 40 GB of hard drive and 1, maybe 2 GB of memory..., you realize "virtualization" is not always an obvious solution.
Hence, some Unix "integration" servers are required for developers to test their products in an environment closer to the target. In a way, this is better, since those integration servers are managed in a uniform way, avoiding the "it works for me™" syndrome, as opposed to virtual machines, where each developer is the root/administrator of their own little world/server ;)
I can give you some common arguments that Windows folks might make, though not ones I necessarily agree with.
People sometimes think that Windows boxes at production time are easier to maintain and deploy. That is because there are a lot of visual tools available to the admin. Therefore they prefer .Net or a Windows-specific development language for easy integration.
If your customers or internal clients all use Windows desktop computers, some would argue that it's less legwork to do stuff with Windows servers. This includes stuff for Microsoft Office document sharing (i.e. SharePoint) or stuff with Windows file sharing. Obviously it's easier to write a .NET application to deal with such Microsoft-specific constraints.
I can't really think of any other reasons. The latter one is probably the most valid -- there just might be some Microsoft-specific technology that is hard to integrate with unless you use MSFT development tools.
Peripheral reasons for some specific kinds of development:
you need to see how things look in both Firefox and Internet Explorer
you're working with Flash (which AFAIK you can't develop on Linux, and the players are terrible)
you're working on a project that involves MS Office integration
your office has some godawful mail or notes system that you can't log into any other way; ditto for some VPN setups
I consider all of these things to be regrettable.
Why not use both?
In either scenario, you could run a virtual machine with either Windows or Linux/Unix for basically nothing using VirtualBox or VMware Player. Or you could remote desktop/VNC to the other platform from your development box. If you develop in .NET you would probably be better off on Windows for dev. If you develop for LAMP, either Windows or *nix would be fine.
give me apache mysql (ok postgres in a pinch) php and eclipse .. who cares about the OS ..