We can use standard C library routines in standalone microcontroller programming, but we can't in the Linux kernel. My question is: in both cases, while they are running on the target hardware, neither has access to libc. That explains the Linux kernel case, but what happens with a standalone application on some microcontroller (say, an MSP430)?
Programs running on embedded controllers are typically linked with a (suitably small) version of the standard library. The kernel is not linked against the standard C library, so the library cannot be used there.
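For example, with a bare-metal MSP430 toolchain you would typically link against a reduced library such as newlib-nano and supply only the low-level output stub yourself. A rough sketch (the stub name/signature and the UART details vary by toolchain, so treat them as placeholders):

    #include <stdio.h>

    /* newlib-style low-level output stub that printf() eventually calls;
     * the exact stub name and signature depend on the toolchain/libc. */
    int _write(int fd, const char *buf, int len)
    {
        (void)fd;
        (void)buf;
        for (int i = 0; i < len; i++) {
            /* e.g. poll the UART TX-ready flag, then write buf[i] to the
             * TX register (register names omitted: they are device-specific) */
        }
        return len;
    }

    int main(void)
    {
        printf("hello from bare metal\n");  /* resolved by the linked-in libc */
        for (;;)
            ;                               /* no OS to return to */
    }

Inside the Linux kernel, by contrast, there is no linked-in libc at all, which is why you use printk() instead of printf().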
I am relatively new to UEFI development and exploring various options to make UEFI development similar to normal C/C++ programming or at least much closer to it.
I found some wonderful work done by amazing people.
Visual UEFI : https://github.com/ionescu007/VisualUefi (Making UEFI EDK2 based development fun)
gnu-efi : https://sourceforge.net/projects/gnu-efi/
Toro-C : https://github.com/KilianKegel/toro-C-Library (Standard C Library porting)
EDK2-LibC : https://github.com/tianocore/edk2-libc (Standard C Library support in EDK2)
I am able to build simple programs with main() and printf() using EDK2-LibC (see "Including edk2-libc in efi shell application").
Toro-C also offers something similar, although it may not have as extensive support as EDK2-LibC.
However, these standard-library-based programs don't work when I boot directly into them (e.g. booting to Hello.efi by changing the boot options and boot order), so I understand that they require the EFI Shell. Correct me if I am wrong.
Is there any way to boot-in to an EFI application that is compiled with EDK2-LibC or Toro-C?
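For reference, the programs I am building are roughly of this shape (a trimmed sketch, not the exact source):

    /* Ordinary ISO C entry point plus stdio; the EDK2-LibC layer maps
     * these onto the UEFI console protocols for me. */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        (void)argc;
        (void)argv;
        printf("Hello from a libc-style UEFI application\n");
        return 0;
    }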
I am new to VxWorks. First of all, can I use VxWorks as a normal OS on my PC? I mean, can I run my application software on VxWorks?
While VxWorks can run on PC hardware, it is not a general-purpose OS for running independent executables. VxWorks is an RTOS library; you statically link it with your application, and the whole thing runs as a single monolithic executable.
It does support a command line interface (intended primarily for development and debug), and from that it is possible to dynamically load and link object files, but these are not independent executables in the sense they are in a GPOS; they essentially become part of the monolithic application.
An RTOS such as OS-9 or QNX would be better suited, as these operate more like a GPOS in the sense of loading and executing independently linked executables.
In any event, application software must be specifically built for these targets.
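A rough sketch of what that monolithic model looks like in practice: kernel-mode application code linked into the VxWorks image and started from the usrAppInit() hook. Header names, the task priority, stack size and the 60 Hz tick assumption below all depend on the VxWorks version and image configuration.

    #include <vxWorks.h>
    #include <taskLib.h>
    #include <stdio.h>

    static void appTask(void)
    {
        for (;;) {
            printf("application task running\n");
            taskDelay(60);   /* ~1 s if the system clock ticks at 60 Hz */
        }
    }

    void usrAppInit(void)    /* called by the kernel image during startup */
    {
        taskSpawn("tApp", 100, 0, 8192, (FUNCPTR)appTask,
                  0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
    }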
For versions of VxWorks prior to VxWorks 6, the answer by clifford provides a good explanation of why this is not really possible.
VxWorks 6 introduced Real Time Processes (RTPs). These are independent, user-mode applications running on top of the VxWorks kernel. Depending on how VxWorks has been configured and built, these RTP applications may have access to POSIX libraries, so you may be able to run POSIX applications (e.g. Linux programs) with little modification.
However, these must still be built for VxWorks, ideally linked against your own VxWorks Source Build.
You cannot, however, just pick up any old application and expect it to run. You are never going to get Word or Excel to run.
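As an illustration, a portable POSIX-style program like the sketch below could be built as an RTP largely unchanged, provided the relevant POSIX components were configured into the VxWorks build (treat the pthreads availability as an assumption):

    #include <stdio.h>
    #include <pthread.h>

    static void *worker(void *arg)
    {
        (void)arg;
        printf("worker thread running\n");
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        if (pthread_create(&t, NULL, worker, NULL) == 0)
            pthread_join(t, NULL);
        return 0;
    }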
The questions I would like to ask are:
1) What exactly does a hypervisor do? Why is it needed?
2) What is the difference between a hypervisor and the Java Virtual Machine?
3) Does the JVM use a hypervisor?
4) When a host operating system like Linux can handle multiple guest operating systems, why use a hypervisor?
It would be a great help if someone could shed light on this.
A hypervisor, also known as hardware virtualization, is a virtualization layer that allows one or more native operating systems to run on top of it as if they were running on a physical machine. It is similar to emulation, but it only runs operating systems that would also be able to run without the hypervisor, which makes it much faster.
Both are virtualization layers. However, Java is optimized for performance and portability. While Java is technically an emulator, it is much faster than a hypervisor. This is possible because the emulated platform is designed for fast emulation. Java does not run x86 or x86_64/amd64 code; it runs something called bytecode, whose technical term is intermediate language (IL). The bytecode is compiled to code native to your processor when you run it, by the just-in-time (JIT) compiler. Because the JIT performs a compilation pass, it can ensure that the program follows Java's security constraints, simply by not generating code that would violate them. A hypervisor enforces security constraints by intercepting so-called privileged instructions and by emulating devices such as disk drives. This is done because native x86 or x86_64/amd64 code is very hard for a program to understand, and changing it so that it enforces security constraints on itself is next to impossible. Java, on the other hand, runs bytecode, which is easy for a program to understand and modify so that it enforces the security rules itself.
The short answer: a hypervisor is slower than Java but allows you to run a multitude of complete operating systems, and all the software available for them, while Java is faster but can only run Java software. If you want to run Windows and Office in your virtual machine, you can't do that in Java.
I think I answered this above, but no: the JVM uses code inspection and modifies the program so that it enforces the security rules itself. This is possible because a runnable Java application is in an intermediate form called bytecode, which is easy for the JVM to understand, inspect, scan for rule-violating code, and modify so that it obeys the rules. This is a rather complex process that has several advantages over a hypervisor. The first advantage is "compile once, run everywhere", as Java is compiled to and distributed as bytecode. The second advantage is speed: JIT-compiled code runs at the same speed as non-virtualized code even when strict security is enforced. The disadvantage is that only bytecode programs can run, so you cannot, for example, run Windows or Linux inside the virtual machine.
If you are running another operating system, such as Windows or another Linux distribution, you are running a hypervisor. KVM, Xen and VirtualBox are examples of hypervisors. You can also run multiple instances of Linux on one shared kernel, known as OS-level virtualization or "containers". But because a container shares the kernel, you can only create virtual machines running the same OS you are already running. The advantage of containers is that they are more lightweight, as you do not need to run multiple kernels on top of each other.
A hypervisor, or virtual machine manager, is a program that allows multiple operating systems to share a single hardware host.
The JVM, or Java Virtual Machine, interprets bytecode for a computer's processor so that it can carry out a Java program's instructions.
No, the JVM does not use a hypervisor, as it is not a virtual machine that runs an OS; rather, it is just an interpreter.
A host operating system manages different VMs using a hypervisor, or virtual machine manager.
Before answering your questions, I would recommend searching for the related entries on Wikipedia. A hypervisor is used to run multiple guest OSes, while the JVM is used to interpret Java bytecode. The JVM runs on top of an OS and does not care whether that OS runs on bare metal or on a hypervisor. Actually, Linux handles multiple guest operating systems with KVM, which is itself a hypervisor that is part of the Linux kernel, so the premise of the last question is wrong.
I just want to know why some games are only Windows-based and won't run on other OSs like Mac OS X and Linux. What makes them different, and how does the program know that the OS is Windows, Linux, or Mac?
Also, similarly, why won't a Windows 7 32-bit driver work on 64-bit and vice versa?
Besides the fact that Mac and Linux use different executable formats (Mach-O and ELF, as far as I've seen) than Windows (PE), even if the executable loader were able to parse everything and load it into memory, many things could go wrong. Library calls such as printf(3) rely on underlying system calls into the OS kernel. In the case of printf(3), it calls fstat(2), sbrk(2) and write(2). (Note that this is the case under the newlib library; I am unsure about other standard C libraries.) As far as I know, the system call interface for Windows can be very different from the one Linux uses, and Windows may even be missing a few system calls that Linux has (like fork(2)).
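A small sketch of that layering on a POSIX-style system (the Windows names above are for contrast only):

    /* printf() ultimately reaches the kernel through the write(2) system
     * call; a kernel with a different system-call interface simply has no
     * such entry point to satisfy the binary. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        printf("via the C library\n");          /* buffered by libc, then write(2) */

        const char msg[] = "via the raw system call\n";
        write(STDOUT_FILENO, msg, strlen(msg)); /* bypasses stdio entirely */
        return 0;
    }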
How compatible is code written under Solaris with OpenSolaris? I would be interested specifically in kernel modules.
I think it is hard to quantify software compatibility, but I'd say code written for Solaris is quite forward compatible with the OpenSolaris kernel. The OpenSolaris source code is evolving into what will become Solaris 11, and Sun's commitment to backwards compatibility is well established.
Kernel modules written for Solaris should function in OpenSolaris after a simple recompile, provided you are using exposed kernel APIs that are compatible between the Solaris and OpenSolaris releases in question.
Sun puts a huge amount of work into ensuring that programs written against publicly exposed interfaces remain compatible. Most API manual pages list an 'Exposure/Stability' entry at the bottom that states, in defined terms, how the interface may be used.
Kernel modules in particular will be very compatible between Solaris and OpenSolaris. OpenSolaris (via Project Indiana) is evolving the user-space components more heavily, including the installer and packages.
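As an illustration of what "using the exposed kernel APIs" means in practice, a minimal loadable-module skeleton built only on the documented DDI/DKI entry points (_init/_info/_fini and the mod_install family) looks roughly like this; treat it as a sketch, not a complete or tested module:

    #include <sys/modctl.h>

    extern struct mod_ops mod_miscops;

    static struct modlmisc modlmisc = {
        &mod_miscops,
        "example misc module"
    };

    static struct modlinkage modlinkage = {
        MODREV_1,
        (void *)&modlmisc,
        NULL
    };

    int
    _init(void)
    {
        return (mod_install(&modlinkage));
    }

    int
    _info(struct modinfo *modinfop)
    {
        return (mod_info(&modlinkage, modinfop));
    }

    int
    _fini(void)
    {
        return (mod_remove(&modlinkage));
    }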
This is with regard to core OS daemons only and not kernel modules, but I've had success compiling OpenSolaris components from source and using the resulting binaries on commercial Solaris just fine. It's obviously easier with a Makefile, but I managed one manually.
I tried this with a small handful of binaries that I needed to add debugging output to, and compiled them directly on the commercial Solaris system using gcc without issue. As mentioned earlier, YMMV based on what app/module it is.