I'm told that Windows NT was originally designed as a microkernel architecture but later moved to a hybrid kernel.
What caused the change? I'm having trouble finding any information about this.
The main reason that Windows NT became a hybrid kernel is speed. A microkernel-based system puts only the bare minimum system components in the kernel and runs the rest of them as user mode processes, known as servers. A form of inter-process communication (IPC), usually message passing, is used for communication between servers and the kernel.
Microkernel-based systems are more stable than monolithic ones: if a server crashes, it can be restarted without bringing down the entire system, which isn't possible when every system component lives inside the kernel. However, because of the overhead incurred by IPC and the extra context switches, microkernels are slower than traditional kernels. Due to those performance costs, Microsoft decided to keep the modular structure of a microkernel but run the system components in kernel space. Starting with Windows Vista, some drivers are also run in user mode.
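To make that overhead concrete, here is a minimal POSIX sketch (an illustration, not Windows code): the same trivial "service" is invoked once as a direct in-process call and once via a round trip over pipes to a child "server" process, with the pipe round trip standing in for the message passing and context switches a microkernel incurs on every request.

```c
/* Illustration only: compares a direct function call with the same request
 * routed through pipes to a child "server" process. The pipe round trip
 * stands in for the message passing and context switches a microkernel
 * design incurs on every service request. POSIX only, not Windows code. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/wait.h>

static int service(int x) { return x + 1; }          /* the "system service" */

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    enum { N = 100000 };
    int req[2], rsp[2];
    if (pipe(req) || pipe(rsp)) { perror("pipe"); return 1; }

    if (fork() == 0) {                    /* child = user-mode "server" */
        int x;
        close(req[1]); close(rsp[0]);
        while (read(req[0], &x, sizeof x) == (ssize_t)sizeof x) {
            x = service(x);
            write(rsp[1], &x, sizeof x);
        }
        _exit(0);
    }

    double t0 = now_sec();
    volatile int acc = 0;
    for (int i = 0; i < N; i++) acc = service(acc);   /* "hybrid": direct call */
    double direct = now_sec() - t0;

    t0 = now_sec();
    int x = 0;
    for (int i = 0; i < N; i++) {                     /* "microkernel": IPC round trip */
        write(req[1], &x, sizeof x);
        read(rsp[0], &x, sizeof x);
    }
    double ipc = now_sec() - t0;

    close(req[1]);                        /* server sees EOF and exits */
    wait(NULL);
    printf("direct calls: %.3f s, IPC round trips: %.3f s\n", direct, ipc);
    return 0;
}
```

The absolute numbers depend on the machine, but the IPC loop is usually dramatically slower per request, which is the kind of cost NT avoided by moving the servers into kernel space.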
I am trying to understand what an OS image and a VM image are, and how they differ.
An OS image commonly refers to the collection of programs and data files needed to make an operating system functional. That is a minimal definition, but an image needn't be minimal.
A virtual machine image commonly refers to all of the state of a virtual machine: memory, device registers, and so on. In contrast to an OS image, a virtual machine image can be resumed after halting, whereas an OS image restarts from the beginning. A system image commonly refers to the equivalent of a virtual machine image for a real hardware machine.
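As a toy analogy (a made-up counter "machine", not a real hypervisor), the difference can be sketched like this: copying the whole machine state is the "VM image" and can be resumed mid-run, while booting again from the "OS image" always starts from scratch.

```c
/* Toy analogy, not a real hypervisor: a "machine" whose entire state is one
 * struct. Copying the struct is the "VM image" (resume mid-run); calling
 * boot() again is starting over from the "OS image". */
#include <stdio.h>

struct Machine { long pc; long counter; };   /* all of the machine's state */

static void boot(struct Machine *m) { m->pc = 0; m->counter = 0; }

static void run(struct Machine *m, int steps) {
    for (int i = 0; i < steps; i++) { m->pc++; m->counter += 2; }
}

int main(void) {
    struct Machine m;
    boot(&m);                      /* start from the "OS image": state is zeroed */
    run(&m, 1000);

    struct Machine snapshot = m;   /* "VM image": full copy of the machine state */

    boot(&m);                      /* rebooting the OS image loses everything */
    printf("after reboot:  pc=%ld counter=%ld\n", m.pc, m.counter);

    m = snapshot;                  /* restoring the VM image resumes where we stopped */
    run(&m, 1000);
    printf("after restore: pc=%ld counter=%ld\n", m.pc, m.counter);
    return 0;
}
```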
Why do these terms exist? When an operating system starts, there is little to no functioning system software on the target machine, so the first level of starting (bootstrapping) is to put some lump of something into RAM and start executing it. That lump might be an operating system, or it may be a small intermediate system that then loads the actual operating system (or loads yet another boot loader). Examples are GRUB and U-Boot.
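For the "put a lump into RAM and start executing it" step, a bare-bones loader boils down to something like the sketch below. It is a Linux user-space illustration, not firmware code, and payload.bin is a hypothetical file that would have to contain position-independent machine code for the host CPU for the jump to work; a real first-stage loader does the same copy-and-jump from ROM with no OS underneath.

```c
/* Sketch of "load a lump into RAM and jump to it", done from Linux user space
 * for illustration. payload.bin is hypothetical and would have to hold
 * position-independent machine code for the host CPU, otherwise the jump
 * will crash. A real first-stage loader does this from ROM/firmware. */
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    FILE *f = fopen("payload.bin", "rb");
    if (!f) { perror("payload.bin"); return 1; }
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);

    /* Grab writable + executable memory and copy the "lump" into it. */
    void *mem = mmap(NULL, (size_t)size, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }
    if (fread(mem, 1, (size_t)size, f) != (size_t)size) {
        fprintf(stderr, "short read\n");
        return 1;
    }
    fclose(f);

    /* "Jump": treat the start of the lump as code and call it. */
    void (*entry)(void) = (void (*)(void))mem;
    entry();
    return 0;
}
```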
An intermediary system may do more than just load a lump and jump. It might understand file systems and be able to parse a device database, and thus construct an appropriate OS image for the target hardware on the fly. The division of labour is a compromise chosen by the system designer.
Intel-based systems add an incredibly complex intermediary into all of this with a system called ACPI, which sits underneath the (Unified) Extensible Firmware Interface. The A in ACPI stands for Advanced; I suppose "new and improved" was too transparent.
Some media have reported that a new hardware bug has been discovered in Intel processors, allowing user-mode processes to access kernel-mode memory:
It is understood the bug is present in modern Intel processors produced in the past decade. It allows normal user programs – from database applications to JavaScript in web browsers – to discern to some extent the layout or contents of protected kernel memory areas.
The effects [of fixes] are still being benchmarked, however we're looking at a ballpark figure of five to 30 per cent slow down, depending on the task and the processor model.
After the bug is fixed, what slowdown should I expect for multicore floating-point computations?
To my understanding, only the performance of switches between kernel mode and user mode is affected. For example, handling a lot of I/O is a workload where such switches are frequent, but CPU-intensive processes should not be affected as much.
To quote from one article that analyzes performance of the Linux KPTI patch:
Most workloads that we have run show single-digit regressions. 5% is a good round number for what is typical. The worst we have seen is a roughly 30% regression on a loopback networking test that did a ton of syscalls and context switches.
...
So PostgreSQL SELECT command is about ~20% slower with KPTI workaround, and I/Os in general seem to be impacted negatively according to Phoronix benchmarks especially with fast storage, but not gaming performance, Linux kernel compilation, H.264 encoding, etc…
Source: https://www.cnx-software.com/2018/01/03/intel-hardware-security-bug-fix-to-hit-performance-on-windows-linux/
So, if your FP computations mostly crunch in-memory data and do little I/O or other system-call traffic, they should be largely unaffected.
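A rough way to see this split on your own machine is a micro-benchmark along these lines (a Linux-specific sketch; syscall(SYS_getpid) is used to force a genuine system call): the syscall loop pays the user/kernel transition cost that KPTI makes more expensive, while the floating-point loop never enters the kernel.

```c
/* Rough micro-benchmark: a syscall-heavy loop pays the user/kernel transition
 * cost that KPTI makes more expensive; a pure floating-point loop never
 * enters the kernel. Linux-specific. */
#define _GNU_SOURCE
#include <math.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    enum { N = 1000000 };

    double t0 = now_sec();
    for (int i = 0; i < N; i++)
        syscall(SYS_getpid);                 /* enters and leaves the kernel */
    double sys_time = now_sec() - t0;

    t0 = now_sec();
    volatile double acc = 0.0;
    for (int i = 0; i < N; i++)
        acc += sqrt((double)i) * 1.0000001;  /* stays entirely in user mode */
    double fp_time = now_sec() - t0;

    printf("syscall loop: %.3f s, FP loop: %.3f s\n", sys_time, fp_time);
    return 0;
}
```

Compile with something like gcc -O2 bench.c -lm; if you compare runs with KPTI enabled and disabled (for example by booting with pti=off, at your own risk), the syscall loop is where the regression should show up, while the FP loop should barely move.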
I have an application that formerly ran on SUSE 11 on an Itanium system.
I'm wondering whether I can now freely move to SUSE 11 for Intel on a new Intel CPU.
What I mean is: is there any chance the application will be affected by changing the CPU type while keeping the same operating system?
A binary compiled for one architecture (here Itanium/IA-64) will not run on another (such as x86-64). However, it is very possible to compile binaries for each platform from the same source code. How likely this is to work depends on too many factors to list here; you will have to try compiling it yourself. Depending on the language, porting across architectures will vary in difficulty. With Java or Python or something similar, the architecture is unlikely to cause problems as long as you are on the same OS.
Link to a GCC-centric guide.
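As a small illustration of "same source, different binaries": the program below compiles unchanged for Itanium, x86-64 or ARM, but each build targets exactly one architecture and reports its own ABI details (the macros shown are ones GCC and Clang predefine). Porting trouble usually comes from code that hard-wires such assumptions.

```c
/* Same source, different binaries: this compiles unchanged on IA-64, x86-64,
 * ARM, etc., but each build targets one architecture and reports its own ABI
 * details. The predefined macros are the ones GCC/Clang set per target. */
#include <stdio.h>

int main(void) {
#if defined(__ia64__)
    const char *arch = "Itanium (IA-64)";
#elif defined(__x86_64__)
    const char *arch = "x86-64";
#elif defined(__aarch64__)
    const char *arch = "64-bit ARM";
#else
    const char *arch = "something else";
#endif
    /* Details that silently differ between builds and can break careless code. */
    printf("built for: %s\n", arch);
    printf("sizeof(long) = %zu, sizeof(void *) = %zu\n",
           sizeof(long), sizeof(void *));
    return 0;
}
```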
I want to know if it is possible to simulate an OS inside a T5220.
Two servers are needed, each running different specific software that works on a T5220, but physically there is only one T5220 with one hard disk, and the software runs on Solaris 10.
I am new to this kind of topic, but is it possible to simulate this kind of platform on an x86 architecture? There are servers of that kind available for this use.
I am open to all kinds of options.
The software is also compatible with the following platforms: SunFire V440, Netra T2000, Netra 440 and Sun Fire X4270 M2. Can any of those be simulated, and if so, what do I need?
You can create both multiple logical domains and multiple zones on a T5220.
With logical domains (Oracle VM for SPARC), you simulate different physical machines, each one with its own Operating System.
With zones, you have OS level virtualization including the ability to simulate older Solaris releases with S9 and S8 branded zones.
On the other hand, I'm not aware of any usable and current Solaris on SPARC emulation layer available for x86.
The question actually is:
If I have some processor named x
and I have 3 operating systems named a, b, c,
how can I decide which operating system controls the processor?
What is the basic understanding between the processor and the operating system?
The above 3 operating systems are not different versions from the same company.
To be specific: how is Android hardware different from iPhone hardware, and why can't iOS be installed on Android hardware?
Thank you.
Your question mixes together several concepts, so it is worth studying the basics of what an OS and a CPU are to get a deeper understanding of the topic. But I'll try to help by defining both terms.
Your computer's operating system has two main objectives in its management of the central processing unit, or CPU. First, the OS makes sure that as many processor cycles as possible are spent doing useful work.
Second, the OS schedules the processor's attention among the demands of different processes. Processes are actions that can be controlled and are the basic units of software with which the OS communicates. A process may be a task, such as a virus check, that runs in the background so you never even know it's working. It also may be one of several tasks that an application, such as a spreadsheet, executes at your request. In a multitasking OS, the OS has to switch the processor's attention between competing processes many times per second because the processor can only do one thing at a time.
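As a toy illustration of that switching (a plain user-space simulation, not how a kernel is actually written), the sketch below shares one "CPU" among three competing "processes" by giving each a fixed time slice in turn, which is the essence of round-robin time sharing on a single core.

```c
/* Toy round-robin scheduler simulation: one "core" is shared among three
 * competing "processes" by handing each a fixed time slice in turn. This is a
 * user-space illustration of the idea, not how a real kernel implements it. */
#include <stdio.h>

struct Proc {
    const char *name;
    int remaining;            /* units of work still to do */
};

int main(void) {
    struct Proc procs[] = {
        { "virus check",  7 },
        { "spreadsheet",  4 },
        { "music player", 5 },
    };
    enum { NPROC = 3, QUANTUM = 2 };   /* time slice per turn */

    int left = NPROC;
    while (left > 0) {
        for (int i = 0; i < NPROC; i++) {
            if (procs[i].remaining == 0)
                continue;                      /* already finished */
            int slice = procs[i].remaining < QUANTUM ? procs[i].remaining
                                                     : QUANTUM;
            procs[i].remaining -= slice;       /* the CPU "runs" this process */
            printf("ran %-12s for %d units, %d left\n",
                   procs[i].name, slice, procs[i].remaining);
            if (procs[i].remaining == 0)
                left--;                        /* process has completed */
        }
    }
    return 0;
}
```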
Briefly summarizing:
A processor is the "engine" of the computer: it runs all the software and moves data around. Generally speaking, a more capable processor has more cores and a higher clock speed (a Core i7, for example).
An operating system is the 'traffic cop' of all the software on the computer - it's software that controls how all the other programs on the computer work together and share the resources of the computer.
Hope this gives you an idea :)