How are Operating Systems "Made"?

Creating an OS seems like a massive project. How would anyone even get started?
For example, when I pop an Ubuntu disc into my drive, how can my computer just run it?
(This, I guess, is what I'd really like to know.)
Or, looking at it from another angle, what is the smallest number of bytes that could be on a disk and still be "run" as an OS?
(I'm sorry if this is vague. I just have no idea about this subject, so I can't be very specific. I like to think I know a fair amount about how computers work, but I'm utterly clueless about this subject.)

Well, the answer lives in books: Modern Operating Systems by Andrew S. Tanenbaum is a very good one.
The simplest yet complete operating system kernel, suitable for learning or just curiosity, is Minix.
Here you can browse the source code.

Operating systems are a huge topic. If you want to go really in depth on how operating systems are designed and constructed, the best thing I can recommend is a good book:
Operating System Concepts

If you are truly curious I would direct you to Linux from Scratch as a good place to learn the complete ins and outs of an operating system and how all the pieces fit together. If that is more information than you are looking for then this Wikipedia article on operating systems might be a good place to start.

A PC knows to look at a specific sector of the disk for its startup instructions. Those instructions then tell the processor that when a given interrupt fires, specific code is to be called. For example: on a periodic tick, call the scheduler code; when something arrives from a device, call the device driver code.
Now how does an OS set everything up with the system? Well, hardware has APIs too. They are written with the systems programmer in mind.
I've seen a lot of bare-bones OSes, and this is really the absolute core. There are many embedded, home-grown OSes that do exactly that and nothing else.
Additional features, such as requiring applications to ask the operating system for memory, requiring special privileges for certain actions, or even processes and threads themselves, are really optional, though they are implemented on most PC architectures.
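To make the "on interrupt X, call code Y" idea concrete, here is a minimal sketch in C of how a bare-bones kernel might dispatch interrupts through a table of function pointers. All names and vector numbers are invented for illustration; on real hardware the low-level vector table (e.g. the x86 IDT) has to be set up in assembly first.

    /* Hypothetical interrupt dispatch table for a bare-bones kernel.
       Vector numbers and handler names are invented for illustration. */
    #define NUM_VECTORS 256

    typedef void (*irq_handler_t)(void);

    static irq_handler_t handlers[NUM_VECTORS];

    static void default_handler(void) { /* ignore unexpected interrupts */ }

    void scheduler_tick(void) { /* pick the next process to run */ }
    void keyboard_isr(void)   { /* read a scancode from the device */ }

    void interrupts_init(void)
    {
        for (int i = 0; i < NUM_VECTORS; i++)
            handlers[i] = default_handler;
        handlers[32] = scheduler_tick;  /* periodic timer tick */
        handlers[33] = keyboard_isr;    /* keyboard device */
    }

    /* Called from the (assembly) low-level interrupt stub. */
    void dispatch_interrupt(int vector)
    {
        handlers[vector]();
    }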

The operating system is, simply, what empowers your software to manage the hardware. Clearly some OSes are more sophisticated than others.
At its very core, a computer starts executing at a fixed address, meaning that when the computer starts up, it sets the program counter to a pre-defined address, and just starts executing machine code.
In most computers, this "bootstrapping" process immediately initializes known peripherals (like, say, a disk drive). Once they are initialized, the bootstrap process will use some predefined sequence to leverage those peripherals. Using the disk drive again, the process might read code from the first sector of the hard drive, place it at a known location within RAM, and then jump to that address.
This predefined sequence (the start of the CPU, the loading of the disk) allows programmers to start adding more and more code at the early stages of CPU startup, which over time can, eventually, start up very sophisticated programs.
In the modern world, with sophisticated peripherals, advanced CPU architectures, and vast, vast resources (GBs of RAM, TBs of disk, and very fast CPUs), the operating system can support quite powerful abstractions for the developer (multiple processes, virtual memory, loadable drivers, etc.).
But for a simple system with constrained resources, you don't really need a whole lot for an "OS".
As a simple example, many small controller computers have very small "OS"es, and some may simply be considered a "monitor", offering little more than easy access to a serial port (or a terminal, or an LCD display). Certainly, there's not a lot of need for a large OS in these conditions.
But also consider something like a classic Forth system. Here, you have a system with an "OS", that gives you disk I/O, console I/O, memory management, plus the actual programming language as well as an assembler, and this fits in less than 8K of memory on an 8-Bit machine.
Or the old days of CP/M with its BIOS and BDOS.
CP/M is a good example of a simple OS that works well as an abstraction layer, allowing portable programs to run on a vast array of hardware, and even then the system took less than 8K of RAM to start up and run.
A far cry from the MBs of memory used by modern OSes. But, to be fair, we HAVE MBs of memory, and our lives are MUCH MUCH simpler (mostly), and far more functional, because of it.
Writing an OS is fun, because it's interesting to make the HARDWARE print "Hello World", shoving data one byte at a time out some obscure I/O port, or stuffing it into some magic memory address.
Get an x86 emulator and party down getting a boot sector to say your name. It's a giggly treat.
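On a stock PC in text mode, that "magic memory address" is the VGA text buffer at 0xB8000. Here's a freestanding C sketch of the idea; it assumes the classic 80x25 color text mode, and it still needs a boot sector or bootloader to load it and call kmain (that function name is just a convention I picked, not anything the hardware requires).

    /* Freestanding sketch: write a string into the VGA text buffer.
       In 80x25 color text mode, video memory starts at physical address
       0xB8000; each cell is a character byte plus an attribute byte. */
    volatile unsigned short *vga = (unsigned short *)0xB8000;

    void kmain(void)
    {
        const char *msg = "Hello from bare metal!";
        for (int i = 0; msg[i] != '\0'; i++)
            vga[i] = (unsigned short)((0x07 << 8) | msg[i]); /* grey on black */
        for (;;) { }  /* hang: there is no OS to return to */
    }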

Basically... your computer can just run the disk because:
The BIOS includes that disk device in the boot order.
At boot, the BIOS scans all bootable devices in order: the floppy drive, the hard drive, the CD-ROM. Each device accesses its media and checks a hard-coded location (typically the first sector on a disk or CD) for a fingerprint that identifies the media as bootable and marks where on the media the instructions start. The BIOS tells the device to move its head (or whatever) to the specified location on the media and read a big chunk of instructions, then hands those instructions off to the CPU.
The CPU executes these instructions. In your case, these instructions are going to start up the Ubuntu OS. They could just as well be instructions to halt, or to add 10+20, etc.
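You can look at that boot "fingerprint" yourself. On a classic MBR-style disk, the first sector is 512 bytes and must end with the signature bytes 0x55 0xAA, or the BIOS won't treat it as bootable. A small host-side C sketch that checks a disk image (the default filename is just an example):

    /* Check whether a disk image's first sector carries the classic
       0x55 0xAA BIOS boot signature (bytes 510 and 511 of the MBR). */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        unsigned char sector[512];
        const char *path = (argc > 1) ? argv[1] : "disk.img"; /* example name */
        FILE *f = fopen(path, "rb");
        if (!f || fread(sector, 1, sizeof sector, f) != sizeof sector) {
            perror(path);
            return 1;
        }
        fclose(f);
        if (sector[510] == 0x55 && sector[511] == 0xAA)
            printf("%s: bootable (0x55 0xAA signature present)\n", path);
        else
            printf("%s: no boot signature\n", path);
        return 0;
    }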
Typically, an OS will start off by claiming a large chunk of memory (again, directly from the hardware, since library calls like 'GlobalAlloc' etc. aren't available; they're provided by the yet-to-be-loaded OS) and starts creating structures for the OS itself.
An OS provides a bunch of 'features' for applications: memory management, a file system, input/output, task scheduling, networking, graphics management, access to printers, and so on. That's what it's doing before you 'get control': creating and starting all the services so that later applications can run together, not stomp on each other's memory, and have a nice API to the OS-provided services.
Each 'feature' provided by the OS is a large topic. The OS provides them all so applications just have to worry about calling the right OS library, and the OS manages situations like two programs trying to print at the same time.
For instance, without the OS, each application would have to deal with another program trying to print at the same time, and 'do something' like print anyway, or cancel the other job. Instead, only the OS has to deal with it; applications just tell the OS 'print this stuff', and the OS ensures one app prints while all the other apps wait until the first one finishes or the user cancels it.
The smallest number of bytes needed to be an OS doesn't really make sense as a question, because an "OS" could imply many, or very few, features. If all you wanted was to execute a program from a CD, that would take very, very few bytes. However, that's not an OS. An OS's job is to provide services (I've been calling them features) that allow lots of other programs to run, and to manage access to those services for the programs. That's hard, and the more shared resources you add (networks, and wifi, and CD burners, and joysticks, and iSight video, and dual monitors, etc., etc.) the harder it gets.
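To give a flavor of what 'claiming a chunk of memory and creating structures' looks like before malloc() exists, here is a minimal bump-allocator sketch of the kind an early-boot kernel might use. The pool size and names are invented for illustration; a real kernel replaces this with a proper memory manager later.

    /* Minimal early-boot "bump" allocator sketch: hand out memory from a
       fixed pool and never free it. Purely illustrative. */
    #include <stddef.h>
    #include <stdint.h>

    static uint8_t boot_pool[64 * 1024];   /* memory claimed at startup */
    static size_t  boot_used;

    void *early_alloc(size_t n)
    {
        n = (n + 7u) & ~(size_t)7u;        /* round up to 8-byte alignment */
        if (boot_used + n > sizeof boot_pool)
            return NULL;                   /* out of early memory */
        void *p = &boot_pool[boot_used];
        boot_used += n;
        return p;
    }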

You are probably looking for this: http://en.wikipedia.org/wiki/Linux_startup_process
Or this: http://en.wikipedia.org/wiki/Windows_NT_startup_process

One of the most recent operating system projects I've seen with serious backing is a Microsoft Research project called Singularity, written entirely from scratch in C#/.NET.
To get an idea of how much work it takes: there are two core devs, but they have up to a dozen interns at any given time, and it still took them two years before they could even get the OS to the point where it would boot up and display BMP images (that's how they used to do their presentations). It took much more work before they even had a command line (about four years).

Basically, there are many arguments about what an OS actually is. If you could get everyone to agree on what an OS specifically is (Is it just the kernel? Everything that runs in kernel mode? Is the shell part of the OS? Is X part of the OS? Is a web browser part of the OS?), your question would be answered. Otherwise, there's no specific answer to your question.

Oh, this is a fun one. I've done the whole thing at one point or another, and been there through a large part of the evolution.
In general, you start writing a new OS by starting small. The simplest thing is a bootstrap loader, which is a small chunk of code that pulls a chunk of code in and runs it. Once upon a time, with the Nova or PDP computers, you could enter the bootstrap loader through the front panel: you entered the instructions hex number by hex number. The boot loader then reads some medium into memory and sets the program counter to the start address of that code.
That chunk of code usually loads something else, but it doesn't have to: you can write a program that's meant to run on the bare metal. That sort of program does something useful on its own.
A real operating system is bigger and has more pieces. You need to load programs, put them in memory, and run them; you need to provide code to run the I/O devices; and as it gets bigger, you need to manage memory.
If you want to really learn how it works, find Doug Comer's Xinu books and Andy Tanenbaum's newest operating system book on Minix.

You might want to get the book The Design and Implementation of the FreeBSD Operating System for a very detailed answer. You can get it from Amazon or from FreeBSD.org's site.

Try How Computers Boot Up, The Kernel Boot Process and other related articles from the same blog for a short overview of what a computer does when it boots.
What a computer does when it starts is heavily dependent (maybe obviously?) on the CPU design and other low-level details; therefore it's kind of difficult to say in general what your computer does when it boots.

I can't believe this hasn't been mentioned... but a classic book for an overview of operating system design is Operating Systems: Design and Implementation, written by Andrew S. Tanenbaum, the creator of MINIX. A lot of the examples in the book are geared directly towards MINIX as well.
If you would like to learn a bit more, OS Dev is a great place to start. Especially the wiki. This site is full of information as well as developers who write personal operating systems for a small project/hobby. It's a great learning resource too, as there are many people in the same boat as you on OSDev who want to learn what goes into an OS. You might end up trying it yourself eventually, too!

The operating system (OS) is the layer of software that controls the hardware. The simpler the hardware, the simpler the OS, and vice-versa ;-)
In the early days of microcomputers, you could fit the OS into a 16K ROM and hard-wire the motherboard to start executing machine code instructions at the start of the ROM address space. This 'bootstrap' process would then load the code for the drivers for the other devices like the keyboard, monitor, floppy drive, etc., and within a few seconds your machine would be booted and ready for use.
Nowadays... same principle, but a lot more, and more complex, hardware ;-)

Well, you have something linking the startup of the chip to a BIOS, and then to an OS; that is usually a very complicated chain handled by a lot of layers of code.
If you REALLY want to know more about this, I would recommend reading a book about microcontrollers, especially one where you create a small OS in C for an 8051 or the like... or learn some x86 assembly and create a very small "bootloader OS".

You might want to check out this question.

An OS is a program, just like any other application you write. The main purpose of this program is that it allows you to run other programs. Modern OSes take advantage of modern hardware to ensure that programs do not clash with one another.
If you are interested in writing your own OS, check out my own question here:
How to get started in operating system development

You ask how few bytes could be put on disk and still be run as an OS. The answer depends on what you expect of your OS, but the smallest useful OS that I know of fits in 1.7 megabytes. It is Tom's Root Boot disk, and it is a very nice, if small, OS with "rescue" applications that fits on one floppy disk. Back in the days when every machine had a floppy drive and not every machine had a CD-ROM drive, I used it frequently.

My take on it is that it is like your own life. At first, you know very little, just enough to get along. This is similar to what the BIOS provides: it knows enough to look for a disk drive and read information off it. Then you learn a little bit more when you go to elementary school. This is like the boot sector being read into memory and given control. Then you go to high school, which is like the OS kernel loading. Then you go to college (drivers and other applications). Of course, this is the point at which you are liable to CRASH. HE HE.
Bottom line is that layers of more and more capability are slowly loaded on. There's nothing magic about an OS.

Reading through here will give you an idea of what it took to create Linux
https://netfiles.uiuc.edu/rhasan/linux/

Another really small operating system that fits on one disk is QNX. When I last looked at it, a long time ago, the whole OS, with a GUI, web browser, disk access, and a built-in web server, fit on one floppy disk.
I haven't heard much about it since then, but it is a real-time OS, so it is designed to be very fast.

Actually, some people attend a 4-year college to get a rough idea of this.

At its core, an OS is extremely simple. Here's the beginner's guide to WHAT successful OSes are made to do:
1. Manage the CPU, using a scheduler which decides which process (a program's running instance) gets to run next (see the sketch at the end of this answer).
2. Manage memory, deciding which processes get to use it for storing instructions (code) and data (variables).
3. Manage I/O interfaces such as disk drives, alarms, keyboard, mouse.
Now, the above three requirements give rise to the need for processes to communicate (and not fight!), to interact with the outside world, and for the OS to help applications do what they want to do.
To dig deeper into HOW it does all that, read the Dinosaur book :)
So you can make an OS as small as you want, as long as it manages to handle all the hardware resources.
When you boot up, the BIOS points the CPU at the bootloader, which loads the first function of the OS residing at a fixed address in memory (something like the main() of a small C program). That function then creates functions and processes and threads and starts the big bang!
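Here is a toy round-robin scheduler sketch in C, to give requirement 1 above some shape. All the names are invented for illustration, and a real scheduler also has to save and restore register state on a context switch, which needs assembly.

    /* Toy round-robin scheduler sketch: on every timer tick, pick the
       next runnable process in a circular scan. Purely illustrative. */
    enum state { RUNNABLE, BLOCKED };

    struct process {
        int        pid;
        enum state state;
    };

    #define MAX_PROCS 8
    static struct process table[MAX_PROCS];
    static int current;

    struct process *schedule_next(void)
    {
        for (int i = 1; i <= MAX_PROCS; i++) {
            int idx = (current + i) % MAX_PROCS;
            if (table[idx].state == RUNNABLE) {
                current = idx;
                return &table[idx];
            }
        }
        return &table[current]; /* nothing else runnable: keep running */
    }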

First: reading, reading, and more reading about what an OS is; then the uses, types, nature, objectives, and needs of the different OSes.
Some links follow; a newbie will enjoy these:
Modern OS - gives an idea about OSes in general.
Start of OS - gives the basics of what it really takes to MAKE an OS, how one can make it, and how one can modify existing open-source OS code oneself.
Wiki OS - gives an idea about the different OSes used in different fields, and their uses (the objectives/features of an OS).
Let's see in general what an OS contains (not a sophisticated one like Linux or Windows):
An OS needs a CPU, and to get code onto it you need a bootloader.
An OS must have objectives to fulfill, and those objectives are defined in a wrapper, which is called the kernel.
Inside you could have a time-based scheduler and ISRs (it depends on the objectives of the OS you need to make).

OS development is complicated. There are some websites, like osdev or lowlevel.eu (German), dedicated to the topic. There are also some books, which others have mentioned already.
I can't help but also reference the "Write your own operating system" video series on YouTube, as I'm the one who made it :-)
See https://www.youtube.com/playlist?list=PLHh55M_Kq4OApWScZyPl5HhgsTJS9MZ6M

Related

Stored Program Computer in modern computing

I was given this exact question on a quiz.
(The quiz question and its answer were given as images.)
Does the question make any sense? My understanding is that the OS schedules a process and manages what instructions it needs the processor to execute next. This is because the OS is liable to pull all sorts of memory management tricks, especially in main memory where fragmentation is a way of life. I remember that there is supposed to be a special register on the processor called the program counter. In light of the scheduler and memory management done by the OS I have trouble figuring out the purpose of this register unless it is just for the OS. Is the concept of the Stored Program Computer really relevant to how a modern computer operates?
Hardware fetches machine code from main memory, at the address in the program counter (which increments on its own as instructions execute, or is modified by executing a jump or call instruction).
Software has to load the code into RAM (main memory) and start the process with its program counter pointing into that memory.
And yes, if the OS wants to page that memory out to disk (or lazily load it in the first place), hardware will trigger a page fault when the CPU tries to fetch code from an unmapped page.
But no, the OS does not feed instructions to the CPU one at a time.
(Unless you're debugging a program by putting the CPU into "single step" mode when returning to user-space for that process, so it traps after executing one instruction. Like x86's trap flag, for example. Some ISAs only have software breakpoints, not HW support for single stepping.)
But anyway, the OS itself is made up of machine code that runs on the CPU. CPU hardware knows how to fetch and execute instructions from memory. An OS is just a fancy program that can load and manage other programs. (Remember, in a von Neumann architecture, code is data.)
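The stored-program idea can be demonstrated in a few lines: a toy machine whose program counter indexes into the same memory that holds the instructions. This is only a simulation of the concept (the opcodes are invented), not how any real ISA encodes instructions.

    /* Toy stored-program machine: instructions live in memory, and a
       program counter selects the next one to fetch and execute. */
    #include <stdio.h>

    enum { OP_LOAD, OP_ADD, OP_PRINT, OP_HALT };

    int main(void)
    {
        /* program: acc = 2; acc += 2; print acc; halt */
        int memory[] = { OP_LOAD, 2, OP_ADD, 2, OP_PRINT, OP_HALT };
        int pc = 0, acc = 0, running = 1;

        while (running) {
            int op = memory[pc++];          /* fetch, then advance the PC */
            switch (op) {
            case OP_LOAD:  acc  = memory[pc++]; break;
            case OP_ADD:   acc += memory[pc++]; break;
            case OP_PRINT: printf("%d\n", acc); break;
            case OP_HALT:  running = 0; break;
            }
        }
        return 0;
    }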
Even the OS has to depend on the processor architecture. Memory today is often virtualized: the memory location seen by the program is not the real physical location, but is indirected through one or more tables describing the actual location and some attributes (e.g. whether read/write/execute is allowed) for memory accesses. If the accessed virtual memory has not been loaded into main memory (the tables say so), an exception is generated, and the address of an exception handler is loaded into the program counter. This exception handler is provided by the OS and resides in main memory. So the program counter is quite relevant with today's computers, but the next instruction can be changed by exceptions on the fly (exceptions are also used for thread or process switching in preemptive multitasking systems).
Does the question make any sense?
Yes. It makes sense to me. It is a bit imprecise, but the meanings of each of the alternatives are sufficiently distinct to be able to say that D) is the best answer.
(In theory, you could create a von Neumann computer which was able to execute instructions out of secondary storage, registers or even the internet ... but it would be highly impractical for various reasons.)
My understanding is that the OS schedules a process and manages what instructions it needs the processor to execute next. This is because the OS is liable to pull all sorts of memory management tricks, especially in main memory where fragmentation is a way of life.
Fragmentation of main memory is not actually relevant. A modern machine uses special hardware (and page tables) to deal with that. From the perspective of executing code (application or kernel) this is all hidden. The code uses virtual addresses, and the hardware maps them to physical addresses. (This is even true when dealing with page faults, though special care will be taken to ensure that the code and page table entries for the page fault handler are in RAM pages that are never swapped out.)
I remember that there is supposed to be a special register on the processor called the program counter. In light of the scheduler and memory management done by the OS I have trouble figuring out the purpose of this register unless it is just for the OS.
The PC is fundamental. It contains the virtual memory address of the next instruction that the CPU is to execute. For application code AND for OS kernel code. When you switch between the application and kernel code, the value in the PC is updated as part of the context switch.
Is the concept of the Stored Program Computer really relevant to how a modern computer operates?
Yes. Unless you are working on a special custom machine where (say) the program has been transformed into custom silicon.

List the four steps that are necessary to run a program on a completely dedicated machine—a computer that is running only that program

In my OS class, we use a text book "Operating System Concepts" by Silberschatz.
I ran into this question and answer in a practice exercise and wanted some further explanation.
Q. List the four steps that are necessary to run a program on a completely dedicated machine—a computer that is running only that program.
A.
1. Reserve machine time
2. Manually load program into memory
3. Load starting address and begin execution
4. Monitor and control execution of program from console
Actually, I don't understand the first step, "Reserve machine time". Could you explain what each step means here?
Thank you in advance.
If the computer can run only a single program, but the computer is shared between multiple people, then you will have to agree on a time that you get to use the computer to run your program. This was common up through the 1960s. It is still common in some contexts, such as very expensive super-computers. Time-sharing became popular during the 1970s, enabling multiple people to appear to share a computer at the same time, when in fact the computer quickly switched from one person's program to another.
In my opinion, teaching about old batch systems in today's OS classes is not very helpful. You should use a text which is more relevant to contemporary OS design, such as the Minix book.
Apart from that, if you really want to learn about old systems, then Wikipedia has a pretty good explanation:
Early computers were capable of running only one program at a time.
Each user had sole control of the machine for a scheduled period of
time. They would arrive at the computer with program and data, often
on punched paper cards and magnetic or paper tape, and would load
their program, run and debug it, and carry off their output when done.

Machine Independence

Let's think about a simple C program compiled in Windows.
I can compile the program on an Intel CPU machine and run it on an AMD CPU machine (same operating system). So does that mean the instruction sets of the CPUs are the same?
Why doesn't the same program run on a machine with a different OS and the same CPU?
The binary layout of the executable files is totally different, as is which libraries are available and how to call them.
Just compare the header of an ELF and an EXE file to see what I mean.
If you write a simple program like int main() { printf("Hello\n"); return 0; }, there is a lot going on behind the scenes, handled by the compiler and runtime, to get those lines printed. Running on the same CPU doesn't help: the CPU could execute the assembly instructions, but the program would fail horribly as soon as it called the first OS function.
To elaborate this a bit:
Just as an example, let's assume that we are running on AmigaOS with a Motorola 68000 CPU.
If I remember correctly, the calling convention for calling a system library involved loading the pointers into, say, an address register of the CPU and then calling the OS function.
Now let's assume I write my own OS, also using a Motorola 68000 CPU. However, when I design my OS, I think it is a much better idea to use the stack for data exchange, so when you call a similar function in my own private OS, you don't pass the address in the address register; instead you push it on the stack.
Now, if your executable were executed on my OS (supposing it could be loaded at all, because I happen to use the same object structure), your executable would put values in a register, and my OS would try to pop them from the stack, because it doesn't know that the values it is looking for are supposed to be somewhere else.
I hope this makes it a bit more understandable, but of course the problems go much deeper than this; this is just a tiny part of the problems involved.
Both your Intel and AMD use the x86 (or x86-64) architecture. That's why you can run the same software on both. However, the compiled program contains more than just dependencies on the architecture, it also contains dependencies on the underlying operating system. Even the binary format of a Linux executable for example is different from a Windows one.
You can, however, take a simple C program which uses the C standard library and compile it across different operating systems and processor architectures. As long as your code does not contain operating-system-dependent code, it will port across operating systems. Similarly, if your code does not rely on, for instance, the underlying architecture's endianness, it will port across architectures.
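To illustrate that last point, here is a common sketch: the standard-library part of this program compiles unchanged on both systems, and the one OS-dependent corner is isolated behind conditional compilation (using the predefined _WIN32 macro).

    /* Portable C with the OS-dependent part isolated behind #ifdef. */
    #include <stdio.h>

    #ifdef _WIN32
    #include <windows.h>
    static void sleep_ms(unsigned ms) { Sleep(ms); }          /* Win32 API */
    #else
    #include <unistd.h>
    static void sleep_ms(unsigned ms) { usleep(ms * 1000); }  /* POSIX API */
    #endif

    int main(void)
    {
        printf("Hello\n");   /* pure ISO C: ports across OSes */
        sleep_ms(100);       /* OS-specific: needs per-OS code */
        return 0;
    }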

OS development tools: advice needed

So I got a new computer for Christmas and it came with Windows 8 pre-installed. Now I've had MORE than enough trouble getting it to run both Ubuntu Linux and Windows 8 on the same drive. Having two operating systems on a single hard drive requires that the drive be partitioned so that the two OSes do not conflict with each other. There is a program called Mini Partition Tool Wizard which runs inside Windows 8 (and there is a similar program for Linux called gparted) which allows you to create and resize hard disk partitions, so long as you don't overwrite the operating system that you're currently using.
To make a long story short: I want to write my very own mini operating system that is to be used exclusively for boot control and hard disk management. That is, once I get it written, debugged, and compiled into executable code, I will put it on a USB memory stick that I can boot from in the BIOS menu, and then directly set up hard drive partitions and even format my hard drive if necessary. I'm quite astonished that the BIOS doesn't give the user the option of doing this directly.
So my question is: can I do this exclusively using the tools of C/C++? Or do I need inline assembly code, or perhaps an assembly module that is used from a C++ program? I'm pretty sure that Mini Partition Tool Wizard is not open source (neither is Windows). I've never written an OS before, so I'm a n00b at this, but willing and able to take the time to learn how it's done.
Can I do this exclusively using the tools of C/C++? Or do I need to have inline assembly code?
You will need some assembly and not the inline kind. Your compiled C/C++ code expects a number things to be set up and configured already (e.g. the 32-bit protected mode of the CPU, the stack, values of the various CPU registers, device drivers, interrupts, the C/C++ memory manager, etc), while the BIOS simply loads one 512-byte-long sector from a disk and transfers control to it, without setting up anything, with the CPU still being in the 16-bit mode.
So, you'd need to write some assembly code to:
load more stuff from the disk, you don't suppose everything will fit into 512 bytes, do you?
switch the CPU into the 32-bit protected mode
reconfigure the interrupt controller so the interrupts do not map onto the same interrupt vectors as protected-mode exceptions (well, this tiny part can be done in C; see the sketch after this list)
write exception handlers
write interrupt handlers for the basic stuff like the timer and the keyboard (if designed carefully, you may only need to do a small part of this in assembly and the rest can be done in C)
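For the interrupt-controller item, here is roughly what remapping the two legacy 8259 PICs looks like in C. The outb() port-write primitive is a one-line assembly stub you have to supply yourself, and the vector offsets 0x20/0x28 are the conventional choice; treat this as a sketch, not a drop-in.

    #include <stdint.h>

    /* Port-output primitive; must be provided as a tiny assembly stub. */
    extern void outb(uint16_t port, uint8_t val);

    #define PIC1_CMD  0x20
    #define PIC1_DATA 0x21
    #define PIC2_CMD  0xA0
    #define PIC2_DATA 0xA1

    /* Remap the legacy 8259 PICs so IRQs 0-15 land on vectors 0x20-0x2F,
       clear of the CPU's protected-mode exception vectors (0x00-0x1F). */
    void pic_remap(void)
    {
        outb(PIC1_CMD,  0x11);  /* ICW1: begin initialization, expect ICW4 */
        outb(PIC2_CMD,  0x11);
        outb(PIC1_DATA, 0x20);  /* ICW2: master PIC offset = vector 0x20 */
        outb(PIC2_DATA, 0x28);  /* ICW2: slave PIC offset = vector 0x28 */
        outb(PIC1_DATA, 0x04);  /* ICW3: slave attached to master IRQ2 */
        outb(PIC2_DATA, 0x02);  /* ICW3: slave's cascade identity */
        outb(PIC1_DATA, 0x01);  /* ICW4: 8086/88 mode */
        outb(PIC2_DATA, 0x01);
        outb(PIC1_DATA, 0x00);  /* unmask all IRQs on the master... */
        outb(PIC2_DATA, 0x00);  /* ...and the slave (mask as you need) */
    }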
And then you'll need to write 32-bit I/O device drivers for everything else, since after the switch you can't use the BIOS's services anymore. Alternatively, you could implement a virtual 8086 machine (using the virtual-8086 mode) in order to delegate this stuff back to the BIOS, and that's not a trivial thing either. Most of this can be done in C, but some knowledge or use of assembly code will still be necessary.
You'll also need to reimplement some parts of the standard library of C (C++), so malloc()/new, putch(), getchar(), fopen(), time() and so on use your low-level APIs instead of Windows' or Linux'.
Prepare to burn a couple of years to get from nothing and lack of knowledge and experience to something working.
And yeah, you can indeed start learning stuff at osdev.org. There are some useful newsgroups as well: comp.lang.asm.x86 and alt.os.development.

Why does an executable program for a specific CPU not work on both Linux and Windows?

An executable such as an .exe does not work on Linux (without Wine). When compiling source code, the compiler produces object code which is specific to a particular CPU architecture. But the same application does not work on another OS with the same CPU. I know the code may include OS-specific instructions that prevent the executable from running, but what about a simple program that computes 2 + 2? The confusing part is what the hell in that machine code prevents it from working. Machine code is specific to the CPU, right? If we stripped away the executable file format, could we see the same machine code (like 2 + 2) for both operating systems?
One more question: what about assembly language? Do Windows and Linux use different assembly languages for the same CPU?
There are many differences. Among them:
Executable Format: Every OS requires the binaries to conform to a specific binary format. For Windows, this is Portable Executable (PE) format. For Linux, it's ELF most of the time (it supports other types too).
Application Binary Interface: Each OS defines a set of primary system functions and the way a program calls them. This is fundamentally different between Linux and Windows. While the instructions that compute 2 + 2 are identical on Linux and Windows in x86 architecture, the way the application starts, the way it prints out the output, and the way it exits differs between the operating systems.
Yes, both Linux and Windows programs on x86 architecture use the instruction set that the CPU supports which is defined by Intel.
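The format difference is visible in the very first bytes of any executable: ELF files begin with 0x7F 'E' 'L' 'F', while Windows executables begin with the old DOS 'MZ' header that precedes the PE headers. A small C sketch that sniffs a file's magic bytes:

    /* Identify an executable's container format by its magic bytes. */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }
        unsigned char magic[4] = {0};
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror(argv[1]); return 1; }
        fread(magic, 1, sizeof magic, f);
        fclose(f);
        if (magic[0] == 0x7F && magic[1] == 'E' && magic[2] == 'L' && magic[3] == 'F')
            printf("%s: ELF (Linux and other Unix-like systems)\n", argv[1]);
        else if (magic[0] == 'M' && magic[1] == 'Z')
            printf("%s: MZ/PE (Windows)\n", argv[1]);
        else
            printf("%s: unknown format\n", argv[1]);
        return 0;
    }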
It's due to the difference of how the program is loaded into memory and given resources to run. Even the simplest programs need to have code space, data space and the ability to acquire runtime memory and do I/O. The infrastructure to do these low-level tasks is completely different between the platforms, unless you have some kind of adaptation layer, like WINE or Cygwin.
Assuming, however, that you could just inject arbitrary assembled CPU instructions into the code segment of a running process and get that code to execute, then, yes, the same code would run on either platform. However, it would be quite restricted, and doing complex things like even jumps to external modules would fail, due to how these things are done differently on different platforms.
Problem 1 is the image format. When an application is launched into execution, the OS has to load the application image, find its entry point, and launch it from there. That means the OS must understand the image format, and formats differ between OSes.
Problem 2 is access to devices. Once launched, an application can read and write registers in the CPU, and that's about it. To do anything interesting, like display a character on a console, it needs access to devices, and that means it has to ask the OS for such access. Each OS offers a different API to access such devices.
Problem 3 is privileged instructions. The newly launched process will probably need memory to store things in; it can't accomplish everything with registers. That means allocating RAM and setting up the translation from virtual addresses to physical addresses. These are privileged operations only the OS can do, and again, the API to access these services varies between OSes.
The bottom line is that applications are not written for a CPU, but against a set of primitive services the OS offers. The alternative is to write the apps against a set of primitive services a virtual machine offers, and this leads to apps that are more or less portable, like Java apps.
Yes, but the code invariably calls out to library functions to do just about anything, like printing "4" to the terminal. And these libraries are platform-specific and differ between Linux and Windows. That is why it's not portable; it's not an instruction-level problem.
Here are some of the reasons I can think of off the top of my head:
Different container formats (which so far seems to be the leading differentiator in these answers; however, it's not the only reason).
Different dynamic linker semantics.
Different ABIs.
Different exception handling mechanisms: Windows has SEH, upon which C++ exception handling is built.
Different system call semantics and different system calls, hence different low-level libraries.
To the second question: Windows only runs on x86, x64, and IA64 (not sure about the mobile versions). For Linux, see here.