My question is quite simple.
I encountered this sys_vm86old syscall (when reverse engineering) and I am trying to understand what it does.
I found two sources that could help, but I'm still not sure I fully understand. These sources are
The Source Code and this page, which gives me this paragraph (though it's more readable directly at the link):
config GRKERNSEC_VM86
bool "Restrict VM86 mode"
depends on X86_32
help:
If you say Y here, only processes with CAP_SYS_RAWIO will be able to
make use of a special execution mode on 32bit x86 processors called
Virtual 8086 (VM86) mode. XFree86 may need vm86 mode for certain
video cards and will still work with this option enabled. The purpose
of the option is to prevent exploitation of emulation errors in
virtualization of vm86 mode like the one discovered in VMWare in 2009.
Nearly all users should be able to enable this option.
From what I understand, it ensures that the calling process has the CAP_SYS_RAWIO capability. But that doesn't help me much...
Can anybody tell me what this syscall actually does?
Thank you
The syscall is used to execute code in VM86 mode. This mode allows you to run old 16-bit "real mode" code (like the code present in some BIOSes) inside a protected-mode OS.
See for example the Wikipedia article on it: https://en.wikipedia.org/wiki/Virtual_8086_mode
The setting you found is a grsecurity option; with it enabled, you need CAP_SYS_RAWIO to invoke the syscall.
I think X11 in particular uses it to call BIOS routines for switching the video mode. There are two syscalls; the one with the old suffix offers fewer operations but is kept for binary (ABI) compatibility.
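If it helps to see what the entry point looks like from user space, here is a minimal sketch of calling the old interface directly. It assumes a 32-bit x86 Linux build where asm/vm86.h and the vm86old syscall number are available; real use also requires mapping the low 1 MiB of memory that the 16-bit code will execute in, which is omitted here.

/* Hedged sketch: invoking sys_vm86old directly via syscall(2).
 * Assumes an i386 build; a real caller must first map the real-mode
 * address space (the low 1 MiB) and put 16-bit code there. */
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <asm/vm86.h>            /* struct vm86_struct */

#ifndef SYS_vm86old
#define SYS_vm86old 113          /* i386 syscall number for vm86old */
#endif

int main(void)
{
    struct vm86_struct v86;
    memset(&v86, 0, sizeof(v86));

    /* 16-bit register state the CPU starts from: CS:IP and SS:SP */
    v86.regs.cs  = 0x0000;
    v86.regs.eip = 0x7C00;       /* e.g. where a boot sector would sit */
    v86.regs.ss  = 0x0000;
    v86.regs.esp = 0x7000;

    /* Fails with EPERM under GRKERNSEC_VM86 without CAP_SYS_RAWIO,
     * and the syscall does not exist at all on 64-bit kernels. */
    long ret = syscall(SYS_vm86old, &v86);
    if (ret == -1)
        perror("vm86old");
    return 0;
}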
Related
gem5 SE mode is a no-OS mode, but I am able to execute row-hammer code on it, which includes commands with OS dependencies. If there is no OS in SE mode, how are they executed?
Most instructions allowed in userland just do the usual thing, which is to change the state of the CPU slightly: touch registers + cache + memory.
Then when a syscall instruction is reached, the syscall is forwarded to the host which actually takes action.
However, this also requires some extra bookkeeping by the emulator, which is why every single syscall must be implemented separately.
If I wanted to learn this :-) I would look at the implementation of a simple syscall like brk:
https://github.com/gem5/gem5/blob/5d442571eff5116551609ee7a3b63a3b9d27ff45/src/arch/x86/linux/process.cc#L223
https://github.com/gem5/gem5/blob/5d442571eff5116551609ee7a3b63a3b9d27ff45/src/sim/syscall_emul.cc#L212
I would also look into QEMU user mode; I think it uses a similar concept, and there is potentially more material available about it.
Maybe someone with a better understanding can explain further in more detail, and annotate specific parts of the code further.
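To make the idea concrete, here is a small, made-up sketch of how such an emulator-side dispatcher can look. None of these names come from gem5 or QEMU; guest_regs, handle_brk and handle_write are invented for illustration, and a real emulator would also translate guest addresses to host addresses before touching them.

/* Hedged sketch of user-mode syscall emulation; NOT gem5's actual code. */
#include <errno.h>
#include <stdint.h>
#include <unistd.h>

typedef struct {
    uint64_t rax, rdi, rsi, rdx;   /* simulated guest registers (x86-64) */
} guest_regs;

/* brk needs emulator bookkeeping: the guest heap end is simulator state. */
static uint64_t guest_brk = 0x10000000;
static uint64_t handle_brk(guest_regs *r)
{
    if (r->rdi != 0)
        guest_brk = r->rdi;        /* a real emulator would also map memory */
    return guest_brk;
}

/* write can be forwarded to the host almost unchanged (a real emulator
   would first translate the guest buffer address to a host pointer). */
static uint64_t handle_write(guest_regs *r)
{
    return (uint64_t)write((int)r->rdi, (const void *)(uintptr_t)r->rsi,
                           (size_t)r->rdx);
}

/* Called whenever the simulated CPU retires a syscall instruction. */
static void emulate_syscall(guest_regs *r)
{
    switch (r->rax) {              /* syscall number lives in rax */
    case 1:  r->rax = handle_write(r); break;    /* SYS_write */
    case 12: r->rax = handle_brk(r);   break;    /* SYS_brk   */
    default: r->rax = (uint64_t)-ENOSYS; break;  /* unimplemented */
    }
}

int main(void)
{
    const char msg[] = "hello from the simulated program\n";
    guest_regs r = { .rax = 1, .rdi = 1,
                     .rsi = (uint64_t)(uintptr_t)msg, .rdx = sizeof(msg) - 1 };
    emulate_syscall(&r);           /* prints the message via the host's write */
    return 0;
}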
I'm writing some code that is going to run under a hypervisor which only allows the open, read, write and close syscalls to the external world.
Since part of the code depends on the platform it is being run on, I'd like to be able to automatically choose the appropriate code path at runtime.
What is the most robust way of detecting the operating system using only these syscalls? I'm primarily interested in detecting Windows and Linux, but OS X would be useful too.
On Linux, the name that uname reports is also exposed in /proc/sys/kernel/ostype; you could open and read that file.
For Windows, it's worse. In theory, you should have C:\windows\system32\kernel32.dll. The problem, however, is that the Windows installation root is not required to be C:\ (although it's very common), so I highly doubt an ordinary open() could be considered reliable.
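A minimal sketch of the Linux half of that idea, using only open/read/close. The file name is the one mentioned above; if procfs is not mounted (or the hypervisor does not expose it), the check simply fails, so it can only confirm Linux, not rule it out.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Returns 1 if /proc/sys/kernel/ostype reads back "Linux", 0 otherwise. */
static int running_on_linux(void)
{
    char buf[16] = {0};
    int fd = open("/proc/sys/kernel/ostype", O_RDONLY);
    if (fd < 0)
        return 0;                        /* no procfs: probably not Linux */
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    close(fd);
    return n > 0 && strncmp(buf, "Linux", 5) == 0;
}

int main(void)
{
    puts(running_on_linux() ? "Linux" : "not Linux (or procfs unavailable)");
    return 0;
}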
So I got a new computer for Christmas and it came with Windows 8 pre-installed. Now I've had MORE than enough trouble getting it to run both Linux Ubuntu and W8 on the same drive. Having 2 operating systems on a single hard drive requires that the drive be partitioned so that the 2 OS's do not conflict with each other. Now there is a program called Mini Partition Tool Wizard which runs inside Windows 8 (and there is a similar program for Linux called gparted) which allows you to create and resize hard disk partitions so long as you don't overwrite the operating system that you're currently using.
To make a long story short: I want to write my very own mini operating system that is to be used exclusively for boot control and hard disk management. That is, once I get it written, debugged, and compiled into executable code, I will put it on a USB memory stick that I can boot from in the BIOS menu and then directly set up hard drive partitions and even format my hard drive if necessary. I'm quite astonished that the BIOS doesn't offer the user options to do this directly.
So my question is: Can I do this exclusively using the tools of C/C++? Or do I need to have inline assembly code? Or perhaps write an assembly code module that is used in a C++ program? Pretty sure that Mini Partition Tool Wizard is not open source (neither is Windows). I've never written an OS before, so I'm a n00b at this, but I'm willing and able to take the time to learn how it's done.
Can I do this exclusively using the tools of C/C++? Or do I need to have inline assembly code?
You will need some assembly, and not the inline kind. Your compiled C/C++ code expects a number of things to be set up and configured already (e.g. the 32-bit protected mode of the CPU, the stack, values of the various CPU registers, device drivers, interrupts, the C/C++ memory manager, etc.), while the BIOS simply loads one 512-byte-long sector from a disk and transfers control to it, without setting up anything, with the CPU still in 16-bit real mode.
So, you'd need to write some assembly code to:
load more stuff from the disk, you don't suppose everything will fit into 512 bytes, do you?
switch the CPU into the 32-bit protected mode
reconfigure the interrupt controller so the interrupts do not map onto the same interrupt vectors as protected-mode exceptions (well, this tiny part can be done in C; see the sketch after this list)
write exception handlers
write interrupt handlers for the basic stuff like the timer and the keyboard (if designed carefully, you may only need to do a small part of this in assembly and the rest can be done in C)
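For the interrupt-controller item, here is a hedged sketch of the classic 8259 PIC remap in C (the ports and initialization words are the standard ones documented on osdev.org; the one-instruction outb helper is shown as inline assembly, though in a real kernel it might live in a separate .asm file):

#include <stdint.h>

/* One-instruction port write. */
static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* Remap the two 8259 PICs so hardware IRQs 0-15 land on vectors
   offset1..offset1+7 and offset2..offset2+7 instead of 0x08-0x0F,
   which protected mode reserves for CPU exceptions. */
void pic_remap(uint8_t offset1, uint8_t offset2)
{
    outb(0x20, 0x11);      /* ICW1: begin init, expect ICW4 (master PIC) */
    outb(0xA0, 0x11);      /* ICW1: begin init, expect ICW4 (slave PIC)  */
    outb(0x21, offset1);   /* ICW2: master vector offset, e.g. 0x20      */
    outb(0xA1, offset2);   /* ICW2: slave vector offset,  e.g. 0x28      */
    outb(0x21, 0x04);      /* ICW3: slave is cascaded on IRQ2            */
    outb(0xA1, 0x02);      /* ICW3: slave's cascade identity             */
    outb(0x21, 0x01);      /* ICW4: 8086/88 mode                         */
    outb(0xA1, 0x01);
    outb(0x21, 0x00);      /* unmask everything; real code usually       */
    outb(0xA1, 0x00);      /* restores the masks it saved beforehand     */
}

A typical call after entering protected mode would be pic_remap(0x20, 0x28).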
And then you'll need to write 32-bit I/O device drivers for everything else since after the switch you can't use the BIOS's. Alternatively, you could implement a virtual 8086 machine (using the virtual 8086 mode) in order to delegate this stuff back to the BIOS and that's not a trivial thing either. Most of this can be done in C, but some knowledge or use of assembly code will still be necessary.
You'll also need to reimplement some parts of the standard library of C (C++), so that malloc()/new, putch(), getchar(), fopen(), time() and so on use your low-level APIs instead of Windows's or Linux's.
Prepare to burn a couple of years to get from nothing and lack of knowledge and experience to something working.
And yeah, you can indeed start learning stuff at osdev.org. There are some useful newsgroups as well: comp.lang.asm.x86 and alt.os.development.
Due to the need to run a 15+ year old application, I wish to create a watchdog program to ensure a 16-bit application is running on a 32-bit version of Windows XP Pro and start it if necessary. Normally I'd use EnumWindows() to look for the application's window. Unfortunately, this doesn't work, or at least not reliably, for 16-bit apps.
Given that I have absolutely no control over the code in the application in question, how can I reliably detect whether or not it's running? I'm coding this in C (not C++ or C#).
You'll probably need to enumerate processes (e.g., EnumProcesses with GetModuleFileNameEx or CreateToolhelp32Snapshot with Process32First and Process32Next). If you don't find an instance of ntvdm.exe, then no 16-bit process is running, and you can stop there. If you do find an instance of ntvdm.exe, you can use VDMEnumTaskWOWEx to enumerate the 16-bit processes running in it.
Back when it was still under the original owners, I posted an article on CodeGuru demonstrating how to do all of this. It'd take a bit of work to make it compile under a modern compiler (e.g., it's old enough that it uses iostream.h instead of iostream), but the process enumeration part should still be pretty much right (though, looking at things, you'll also need to enable the SeDebugPrivilege for it to work on Windows 7).
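Here is a hedged sketch of the first half of that approach (spotting ntvdm.exe with a Toolhelp snapshot); the find_ntvdm name is made up, and the VDMEnumTaskWOWEx step is left out because it needs vdmdbg.h and its callback plumbing:

#include <windows.h>
#include <tlhelp32.h>
#include <stdio.h>

/* Returns TRUE and fills *pid if an ntvdm.exe instance is found. */
static BOOL find_ntvdm(DWORD *pid)
{
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    PROCESSENTRY32 pe;
    BOOL found = FALSE;

    if (snap == INVALID_HANDLE_VALUE)
        return FALSE;

    pe.dwSize = sizeof(pe);
    if (Process32First(snap, &pe)) {
        do {
            if (lstrcmpi(pe.szExeFile, TEXT("ntvdm.exe")) == 0) {
                *pid = pe.th32ProcessID;
                found = TRUE;
                break;
            }
        } while (Process32Next(snap, &pe));
    }
    CloseHandle(snap);
    return found;
}

int main(void)
{
    DWORD pid;
    if (find_ntvdm(&pid))
        printf("ntvdm.exe is running (PID %lu); enumerate its WOW tasks next.\n", pid);
    else
        printf("no ntvdm.exe, so no 16-bit app is running.\n");
    return 0;
}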
An executable program such as an .exe does not work on Linux (without Wine). When compiling source code, the compiler produces object code which is specific to a particular CPU architecture. But the same application does not work on another OS with the same CPU. I know the code may include OS-specific instructions that would prevent the executable from running, but what about a simple program like 2 + 2? The confusing part is what exactly in the machine code prevents it from working. Machine code is specific to the CPU, right? If we stripped away the executable file format, would we see the same machine code (like 2 + 2) for both operating systems?
One more question: what about assembly language? Do Windows and Linux use different assembly languages for the same CPU?
There are many differences. Among them:
Executable Format: Every OS requires the binaries to conform to a specific binary format. For Windows, this is Portable Executable (PE) format. For Linux, it's ELF most of the time (it supports other types too).
Application Binary Interface: Each OS defines a set of primary system functions and the way a program calls them. This is fundamentally different between Linux and Windows. While the instructions that compute 2 + 2 are identical on Linux and Windows in x86 architecture, the way the application starts, the way it prints out the output, and the way it exits differs between the operating systems.
As for the second question: yes, both Linux and Windows programs on the x86 architecture use the same instruction set, the one the CPU supports, which is defined by Intel.
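To make the ABI point concrete, here is a hedged sketch of the same "print and exit" program written against the two OS interfaces. Any arithmetic would compile to identical CPU instructions either way; only the calls that do I/O and terminate the process differ. This is just an illustration, not the only way to do I/O on either system.

/* Same logic, different OS interfaces. Build each half with the
   matching toolchain. */
#ifdef _WIN32
#include <windows.h>

int main(void)
{
    const char msg[] = "2 + 2 = 4\n";
    DWORD written;
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);    /* Win32 API call    */
    WriteFile(out, msg, sizeof(msg) - 1, &written, NULL);
    ExitProcess(0);                                  /* Win32 way to exit */
}
#else
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const char msg[] = "2 + 2 = 4\n";
    syscall(SYS_write, 1, msg, sizeof(msg) - 1);     /* Linux system call */
    syscall(SYS_exit, 0);                            /* Linux way to exit */
}
#endif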
It's due to differences in how the program is loaded into memory and given resources to run. Even the simplest programs need to have code space, data space, and the ability to acquire runtime memory and do I/O. The infrastructure to do these low-level tasks is completely different between the platforms, unless you have some kind of adaptation layer, like WINE or Cygwin.
Assuming, however, that you could just inject arbitrary assembled CPU instructions into the code segment of a running process and get that code to execute, then, yes, the same code would run on either platform. However, it would be quite restricted, and doing complex things like even jumps to external modules would fail, due to how these things are done differently on different platforms.
Problem 1 is the image format. When an application is launched into execution, the OS has to load the application image, find out its entry point, and launch it from there. That means the OS must understand the image format, and different OSes use different formats.
Problem 2 is access to devices. Once launched, an application can read and write registers in the CPU, and that's about it. To do anything interesting, like display a character on a console, it needs access to devices, and that means it has to ask the OS for such access. Each OS offers a different API to access such devices.
Problem 3 is privileged instructions. The newly launched process will likely need memory locations to store things; it can't accomplish everything with registers. That means it needs to allocate RAM and set up the translation from virtual addresses to physical addresses. These are privileged operations only the OS can do, and again, the API to access these services varies between OSes.
The bottom line is that applications are not written for a CPU, but for a set of primitive services the OS offers. The alternative is to write the app against a set of primitive services a virtual machine offers, which leads to apps that are more or less portable, like Java apps.
Yes, but the code invariably calls out to library functions to do just about anything -- like printing "4" to the terminal. And these libraries are platform-specific and differ between Linux and Windows. This is why it's not portable -- it's not, indeed, an instruction-level problem.
Here are some of the reasons I can think of off the top of my head:
Different container formats (which so far seem to be the leading differentiator in the answers -- however, it's not the only reason).
Different dynamic linker semantics.
Different ABIs.
Different exception handling mechanisms -- Windows has SEH, upon which C++ exception handling is built.
Different system call semantics and different system calls -- hence different low-level libraries.
To the second question: Windows only runs on x86, x64, and IA64 (not sure about the mobile versions). For Linux, see here.