I know that AMD64, a.k.a. x86-64, is AMD's own proprietary technology that can be licensed by third parties, and some do license it, like Intel, VIA, etc.
I also know that the "big thing" about the AMD64 ISA is that it extends the x86 ISA, so compatibility is a fundamental advantage over Intel's IA-64.
But (my question comes now ;)) as AMD64 relies on the basic x86 instruction set, does this mean that if AMD did not have a license to x86 from Intel, AMD64 would just be an extension to x86 without the x86 instruction set itself? Or does AMD64 "reimplement/redefine" the whole x86 ISA, making the x86 license unnecessary in this regard? (I guess AMD's licensing of x86 is not just about having a complete ISA together with AMD64, so this is just a "what if" question to help me better understand how AMD64 depends on, or is free from, x86.)
If a manufacturer wanted to make a CPU with purely the AMD64 ISA, would it be possible to make an OS that runs on it? Would it involve the x86 instruction set? Or can AMD64 not be defined without x86, so that there is a set of basic instructions that are not part of AMD64 itself, without which no CPU can work at all?
Unlike AArch64 vs. ARM 32-bit, it's not even a separate machine-code format. I think you'd have a hard time justifying an x86-64 as separate from x86, even if you left out "legacy mode" (i.e. the ability to work exactly like a 32-bit-only CPU, until/unless you enable 64-bit mode).
x86-64's 64-bit mode uses the same opcodes and instruction formats (with mostly just a new prefix, REX): https://wiki.osdev.org/X86-64_Instruction_Encoding. I doubt anyone could argue it was substantially different from x86, or whatever the required standard is for patents. (Although patents on that might be long expired, if they date back to the 8086.)
Especially given that long mode includes 32/16-bit "compat" sub-modes (https://en.wikipedia.org/wiki/X86-64#Operating_modes), and uses Intel's existing PAE page-table format.
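To make the shared-encoding point concrete, here is a hand-assembled illustration of my own (bytes shown as C arrays): the 64-bit form of an instruction is the 32-bit form with a REX prefix byte in front.

    /* Same "mov" opcode (0x89) and ModRM byte in both modes; 64-bit
     * operand size just prepends a REX.W prefix (0x48). */
    unsigned char mov32[] = { 0x89, 0xD8 };        /* mov eax, ebx */
    unsigned char mov64[] = { 0x48, 0x89, 0xD8 };  /* mov rax, rbx */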
But note that a lot of the patent-sharing stuff between Intel and AMD is for implementation techniques, for example a "stack engine" that handles the modification-to-stack-pointer part of push/pop/call/ret, letting it decode to 1 uop and avoiding a latency chain through RSP. Or IT-TAGE branch prediction (Intel Haswell, AMD Zen 2). Or perhaps the whole concept of decoding to uops, which Intel first did with P6 (Pentium Pro) in ~1995.
Presumably there are also patents on ISA extensions like SSE4.1 and AVX that would make it unattractive to sell a CPU without them, for most purposes. (SSE2 is baseline for x86-64, so you need that. Again, the instructions and machine-code formats are identical to 32-bit mode.)
BTW, you'd have to invent a way for it to boot, starting in long mode, which requires paging to be enabled. So maybe boot with a direct mapping of some range of addresses? Or invent a new sub-mode of long mode that allows paging to be disabled, using physical addresses directly.
The firmware could handle this and boot via 64-bit UEFI, potentially allowing 64-bit OSes to run unmodified as long as they never switch out of long mode.
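A rough osdev-style sketch of that first idea (entirely my own illustration, assuming a CPU with 1 GiB page support): firmware could build an identity mapping with just two small tables before handing off.

    #include <stdint.h>

    #define PTE_P  (1ULL << 0)   /* present  */
    #define PTE_RW (1ULL << 1)   /* writable */
    #define PTE_PS (1ULL << 7)   /* in a PDPT entry: maps a 1 GiB page */

    static uint64_t pml4[512] __attribute__((aligned(4096)));
    static uint64_t pdpt[512] __attribute__((aligned(4096)));

    /* Identity-map the low 512 GiB: one PML4 entry pointing at a PDPT
     * whose 512 entries each map physical address i << 30. Running
     * before paging is enabled, addresses here are physical already. */
    void build_identity_map(void)
    {
        for (uint64_t i = 0; i < 512; i++)
            pdpt[i] = (i << 30) | PTE_P | PTE_RW | PTE_PS;
        pml4[0] = (uint64_t)pdpt | PTE_P | PTE_RW;
        /* then point CR3 at pml4 and switch into long mode */
    }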
Note that AMD, when designing AMD64, intentionally kept x86's variable-length hard-to-decode machine-code format as unchanged as possible, and made as few other changes as possible.
This means the CPU hardware doesn't need separate decoders, or separate handling in execution units, to run in 64-bit mode. AMD weren't sure AMD64 was going to catch on, and presumably didn't want to be stuck needing a lot of extra transistors to implement 64-bit mode when hardly anybody was going to take advantage of it.
(Which was definitely true even in practice for their first-generation K8 chips; it was years before 64-bit Windows was common, and GNU/Linux users running still-evolving amd64 ports of distros were only a small fraction of the market back in 2003.)
Unfortunately this means that, unlike AArch64, AMD64 missed the opportunity to clean up some of x86's minor warts. (Providing setcc r/m32 instead of the inconvenient setcc r/m8 is my favourite example of a change I would have made to an opcode's semantics in 64-bit mode vs. 16 and 32-bit.)
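To illustrate (hand-assembled bytes as a C array, purely my own example): because setcc still writes only 8 bits in 64-bit mode, materializing a flag into a full register costs an extra zeroing instruction.

    /* Producing a zero-extended boolean takes an extra setup instruction;
     * the xor must come before the compare, since xor writes flags. */
    unsigned char bool_from_cmp[] = {
        0x31, 0xC0,        /* xor eax, eax   ; clear the full register */
        0x39, 0xD1,        /* cmp ecx, edx   ; sets ZF */
        0x0F, 0x94, 0xC0   /* sete al        ; writes only the low byte */
    };

A hypothetical setcc r/m32 would have made the xor unnecessary.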
I can see why they didn't want to totally redesign the machine-code format and need an entirely new decoding method; as well as costing silicon, that would force toolchain software (assemblers / disassemblers) to change more, instead of mostly minor changes to existing tools. That would slightly raise the barrier to adoption of their extension to x86, which was critical for them to beat IA-64.
(IA-64 was Intel's 64-bit ISA at the time, whose semantics are very different from x86 and thus couldn't even share much of a back-end. It would have been possible to redesign machine-code for mostly the same instruction semantics as x86 though. See Could a processor be made that supports multiple ISAs? (ex: ARM + x86) for more about this point: separate front-ends to feed a common back-end can totally work if the ISAs are basically the same, like just a different machine-code format for most of the same semantics.)
From my research I cannot find what kernel type is used in eCos, such as monolithic or micro-kernel. All I could find is that the kernel is a real-time one, or websites just describe it as "the eCos kernel". Does this mean it is a custom-made kernel?
What I know about eCos is that it is a hard RTOS, although somewhat vulnerable in terms of security, and that it uses priority-based, queue-based scheduling.
A micro-kernel is:
... the near-minimum amount of software that can provide the mechanisms needed to implement an operating system (OS). These mechanisms include low-level address space management, thread management, and inter-process communication (IPC).
(Wikipedia 11 Dec 2018)
The eCos kernel is described in its Reference Manual thus:
It provides the core functionality needed for developing
multi-threaded applications:
- The ability to create new threads in the system, either during startup or when the system is already running.
- Control over the various threads in the system, for example manipulating their priorities.
- A choice of schedulers, determining which thread should currently be running.
- A range of synchronization primitives, allowing threads to interact and share data safely.
- Integration with the system's support for interrupts and exceptions.
By comparison of these descriptions, it is quite clearly a micro-kernel. Other services provided by eCos, such as file systems, networking and device drivers, are external to and separable from the kernel. That is to say, you can deploy the kernel alone without such services and it remains viable.
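To make the "kernel alone" point concrete, here is a minimal sketch against the eCos kernel C API (kapi); the thread body is a placeholder, but the calls are the documented ones.

    #include <cyg/kernel/kapi.h>

    static cyg_thread   thread_data;     /* kernel's per-thread bookkeeping */
    static cyg_handle_t thread_handle;
    static char         stack[4096];

    static void worker(cyg_addrword_t data)
    {
        for (;;)
            cyg_thread_delay(100);       /* placeholder work: sleep 100 ticks */
    }

    void cyg_user_start(void)            /* eCos application entry point */
    {
        cyg_thread_create(10,            /* priority */
                          worker, 0, "worker",
                          stack, sizeof stack,
                          &thread_handle, &thread_data);
        cyg_thread_resume(thread_handle);  /* threads are created suspended */
    }

Nothing here touches file systems, networking or drivers: threads, scheduling and synchronization are the whole kernel surface.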
In a monolithic kernel, these services are difficult or impossible to separate, as they are an intrinsic part of the whole. Unlike eCos and most other RTOSes, monolithic kernels do not scale well to the small hardware platforms common in embedded systems. They are suited to desktop and general-purpose computing platforms, because those platforms are themselves monolithic: a PC without a filesystem, display, keyboard etc. is not really viable, whereas in an embedded system that is not the case.
While Linux and even Windows are used in embedded systems, a micro-kernel is deployable on platforms with a few tens of kilobytes of memory, whereas a minimal embedded Linux, for example, requires several megabytes and will include a great deal of code that your application may never use.
Ultimately the distinction is perhaps irrelevant, as is the terminology. It is what it is. You do not choose your kernel or OS on this criterion, but rather on whether it provides the services you require, runs on your target, and fits in the available resources.
I think it is a monolithic kernel. If you review this page: http://ecos.sourceware.org/getstart.html
It is used in place of the Linux kernel, and the Linux kernel is monolithic. In addition, if it were a micro-kernel, they would highlight the kernel type, as QNX does (its kernel type is micro-kernel).
I need to know: is an operating system designed for a specific category of processor?
And also, can any operating system run on any microprocessor?
Generally speaking, an operating system is not designed for a specific processor, though some do make assumptions about the hardware and the computer system overall that might not hold on all systems. That said, for an operating system to run on a particular architecture, there is usually code that performs some specific, critical functions that is implemented for that architecture, frequently written in assembly (I know of no OS that doesn't do this). To enable a new architecture, this code needs to be rewritten for the new machine, which means new assembly most of the time. As mentioned in the comments, there are operating systems that run on only a single architecture, like Windows, while others have these architecture-specific components for a number of architectures and thus run on a number of processors, like Linux. Note however that the exact same binary will not run across architectures: the operating system needs to be rebuilt for each architecture, and possibly even for the same architecture if the system itself is different enough (as can be the case with some small MCUs).
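As an illustration (names and paths hypothetical), portable kernels typically hide that architecture-specific code behind a small C interface, with one assembly implementation per supported machine:

    /* context.h -- the portable part of the kernel sees only this. */
    #ifndef CONTEXT_H
    #define CONTEXT_H

    typedef struct cpu_context cpu_context_t;  /* layout differs per architecture */

    /* Save 'from', restore 'to'. Written in assembly once per port,
     * e.g. arch/x86_64/switch.S or arch/arm64/switch.S. */
    void context_switch(cpu_context_t *from, cpu_context_t *to);

    #endif /* CONTEXT_H */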
So to answer your two questions directly: no, an OS is not usually designed for a specific processor, and no, any OS cannot run on any processor.
Historically, operating systems have been designed for specific hardware. In some cases, such as Unix, the system was reworked so that it could be ported to multiple systems.
Microsoft ported Windows NT to the Alpha processor in order to placate Digital and avoid lawsuits.
[C]an any operating system run on any microprocessor?
No.
I want to clarify before the question that I am not an established professional programmer in any position at any firm. This is solely to satisfy curiosity, and will not pertain to any task or project at this time.
As I understand it, firmware is software placed on hardware to grant it autonomous functionality, driven by instructions that arrive through some form of input, as long as the input stream is readable, which is made possible through drivers. Drivers are software packages with pre-written reference libraries that recognize a specific set of instructions for each possible function of the attached device.
NOTE: not quoted, so I'm aware that this could be inaccurate.
What I want to know is how firmware or drivers are placed on devices without installation through an OS or a storage medium such as a DVD or USB drive; specifically, firmware installed by manufacturers, like the BIOS and the keyboard drivers that are present on all computers. I'm assuming these are less reliant, or not reliant at all, on compilation in order to function properly, which is the sole reason I'm asking this question.
Can firmware be developed without compilation?
References
Demystifying Firmware
C++ Kernel Development
Starting Firmware Development
These just explain that an OS is a type of firmware, and that firmware is primarily developed in C, with assembly and C++ as plausible alternatives; this pertains to kernel development as well.
Yes, especially in the larger components. An example involving Lua is http://nodelua.org/doc/index/
However, firmware development is typically an extremely memory (and frequently CPU) constrained environment.
C (or traditionally, assembler) is often preferred because it can produce extremely small executables, and is very efficient in stack usage. This matters when you're counting memory in bytes, or kilobytes.
Using a non-compiled language means you need to include a tiny interpreter, and you might not be able to set aside enough memory for this.
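For a sense of scale, here is a sketch of roughly the smallest interpreter core imaginable (toy opcodes invented for illustration); even this costs code space plus a dedicated evaluation stack in RAM.

    #include <stdint.h>

    enum { OP_HALT, OP_PUSH, OP_ADD };   /* invented toy opcodes */

    /* Run a tiny stack-machine program; returns the top of stack at halt. */
    int run(const uint8_t *code)
    {
        int stack[16], sp = 0;           /* even this small stack is RAM spent */
        for (int pc = 0; ; pc++) {
            switch (code[pc]) {
            case OP_HALT: return stack[sp - 1];
            case OP_PUSH: stack[sp++] = code[++pc]; break;
            case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
            }
        }
    }
    /* run((const uint8_t[]){OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT}) == 5 */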
You've made an edit, wherein you suggest that an "OS is a type of firmware".
This can be true, in a manner of speaking.
Often firmware itself can consist of an operating system with components. As an example, the firmware in some home internet routers will contain an OS (which might very well be Linux!), yet it is still regarded as firmware. There is a bit of a grey area between a computer that is an "embedded device with firmware" and a "regular computer with regular software", but generally firmware is a computer system running in a very constrained environment, often with very specific uses.
NetBSD includes Lua in its kernel. Many systems have been developed that do not use assembly (except for a small part), C, or C++, but instead use some other language, though it is typically still compiled for size and performance reasons.
As for the actual transfer of firmware (whatever the form it may be in), this depends on the device in question.
Some devices require that the firmware be burned into the components (in ROM, though there are various types of ROM and some can be rewritten).
Other devices require that the firmware be transferred when the device is turned on.
And yet others have SD cards or battery-backed RAM or similar that allow storing the firmware across reboots.
I heard it is possible to write an operating system for the PIC microcontroller, using the built-in bootloader and a kernel that you write. I also heard it has to be an RTOS.
1. Is this true? Can you actually make an operating system kernel (using C/C++) for PIC?
2. If yes to 1, are there any examples of this?
3. If yes to 1, would you need any type of software to create the kernel?
4. Is Microchip the only company that makes PIC microcontrollers?
5. Can the PIC microcontroller be programmed on a Mac?
Thanks!
Yes, you can write your own kernel (I have written two of my own). Yes, you can write it in C for PIC. If you want pre-emptive scheduling, then you're going to have a real tough time avoiding assembly completely when writing the context switch. On the other hand, you can easily write a cooperative kernel purely in C (I have done this myself; see the sketch below). (Note that creating an operating system is not a simple task... I'd suggest getting your feet wet in pure C first, then USE an OS or two, then try creating one.)
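A minimal sketch of what that pure-C cooperative approach can look like (task names invented; each task must return promptly so the others get a turn):

    typedef void (*task_fn)(void);

    static void blink_task(void)  { /* e.g. toggle an LED pin */ }
    static void sensor_task(void) { /* e.g. poll an ADC       */ }

    static task_fn tasks[] = { blink_task, sensor_task };

    int main(void)
    {
        for (;;)                                     /* the scheduler is just a loop */
            for (unsigned i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
                tasks[i]();                          /* run-to-completion, no preemption */
    }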
An excellent example of this is FreeRTOS. It has preexisting ports (i.e. MPLAB projects that run without any modification on the Explorer16 demo board) for the PIC24F, dsPIC33F, and PIC32MX (as well as some twenty-odd other official ports for other vendors' devices). The PIC18F is supported, but it's not pretty...
You only need MPLAB to create the kernel (it is free from Microchip). It can work interchangeably with C and assembly. Depending on the processor, there are free versions of Microchip's C30 and C32 compilers to go with MPLAB.
PIC is a type of microcontroller, and is a trademark of Microchip. Many other companies make microcontrollers and call them something else (e.g. AVR, LPC, STM32).
Yes, the new version of MPLAB X is supported on Mac, Linux and Windows.
I suggest you check out FreeRTOS.
I second the vote for FreeRTOS; we use this all the time on PIC24 designs. The port works well and doesn't use a ton of memory.
Microchip supports many third party RTOSes.
Most have free demo projects that you can download, build in MPLAB, and program onto an Explorer16 board very easily. You can then experiment to your heart's content.
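For a taste of what those demo projects contain, here is a minimal FreeRTOS-style task (the FreeRTOS calls are the real API; the LED toggle is a placeholder for board-specific code):

    #include "FreeRTOS.h"
    #include "task.h"

    static void vBlinkTask(void *pvParameters)
    {
        (void)pvParameters;
        for (;;) {
            /* toggle_led();  -- board-specific, not shown */
            vTaskDelay(pdMS_TO_TICKS(500));    /* block this task for 500 ms */
        }
    }

    int main(void)
    {
        xTaskCreate(vBlinkTask, "blink", configMINIMAL_STACK_SIZE,
                    NULL, tskIDLE_PRIORITY + 1, NULL);
        vTaskStartScheduler();                 /* hands control to the kernel */
        for (;;);                              /* reached only if startup fails */
    }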
PIC is not a single architecture. The PIC10 differs considerably from the PIC24, though they and every PIC in between share some commonality. The MIPS-based PIC32, on the other hand, is an entirely different architecture. So you have to be clear about which PIC you are referring to.
An OS on a PIC does not have to be an RTOS, but an RTOS is ideally suited to the application domain these devices are used in, so anything not real-time capable would be somewhat less useful.
There are many RTOS ports already for PIC.
There is nothing special about a kernel scheduler in terms of development method: C and, in most cases, a little assembler are all that is necessary; no special tools. You could use 100% assembler if you wished, and this might be necessary to get the smallest/fastest code, but only if your assembler knowledge is better than the compiler's.
PIC is specific to Microchip, though the Parallax SX is more or less a clone. Unlike ARM, for example, Microchip do not license the architecture to third-party chip manufacturers or IP providers. No one would want it in any case, IMO; there are far better architectures. ARM Cortex-M is particularly suited to RTOS kernel implementation, and the AVR instruction set is designed for efficient translation from C source code. Even the venerable 8051 is well suited to RTOS implementation; its four register banks make context switches very fast (for up to four threads), and, like ARM, 8051-architecture devices are available from multiple manufacturers.
The hardware stack of the PIC18F CPU is only 31 entries deep, and other RAM cannot be used as a stack. Even the 8051's IRAM gives you 128 bytes of stack. I have written RTOSes for the 8051, ARM and PIC18F, and it does not feel good on the PIC18F. If the RAM (16K to 64K) of the PIC32 could be used as a stack, with a 16-bit stack pointer, it would be much better than the PIC18F types. Does anyone know?
I wonder how virtualization software such as VirtualBox or VMware Workstation works. How can it create a virtual environment that operating systems take to be a separate computer? I'm almost sure the answer to this question is very deep, but I'd be well satisfied with basic theory.
How does VMWare work:
http://www.extremetech.com/article2/0,2845,1624080,00.asp
How does virtualization work:
http://blog.tmcnet.com/voip-enterprise/tmcnet/how-does-virtualization-work-and-why-is-now-a-good-time-to-check-it-o.asp
Server Virtualization FAQ
http://www.itmanagement.com/faq/server-virtualization/
In the simplest sense, a virtualised environment is to a native environment what an interpreted language, like PHP, JavaScript or BASIC, is to a compiled language like C, C++ or assembler.
When a compiled binary executes, the binary machine code is passed straight to the CPU. When an interpreted language runs, however, the language runtime reads in the code, decides what it means, and executes binary procedures to reflect that.
So virtualisation software like Qemu, while compiled to run on, say, an x86 processor, will read a binary intended for, say, a (PowerPC) Mac, and interpret the binary it receives: switching it from big- to little-endian where needed, knowing that opcode X on the Mac corresponds to opcode Y on x86, and knowing that opcode A on the Mac has no x86 equivalent and so requires calling function B on x86, and so on.
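The shape of that interpret-and-execute cycle is easy to sketch (the guest opcodes below are invented; real emulators like Qemu are far more sophisticated, e.g. using dynamic binary translation):

    #include <stdint.h>

    /* Toy guest CPU state: a few registers and a program counter. */
    typedef struct { uint32_t r[4]; uint32_t pc; } guest_t;

    /* Execute one guest instruction by performing the equivalent
     * host operation -- the essence of interpretation. */
    void step(guest_t *g, const uint8_t *mem)
    {
        switch (mem[g->pc++]) {
        case 0x01: g->r[0]++;         break;  /* guest "inc r0"     */
        case 0x02: g->r[0] = g->r[1]; break;  /* guest "mov r0, r1" */
        default:   /* unknown opcode: trap or call a helper */ break;
        }
    }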
The really clever bit is the hardware emulation, where someone has to write a driver that runs within Qemu on x86 but presents an interface to the Mac-facing side of Qemu, to make Mac applications think they're talking to Mac hardware.
In the most basic sense, virtualization software puts a computer within another computer... kind of. (Here's a link that's very, very basic: http://blog.capterra.com/virtualization-software)
In a more complex sense, virtualization software (also called a hypervisor) abstracts the characteristics of a server. This allows several OSs to run on a single physical server.