Can user code create a new memory segment? - x86-64

I've been reading about memory segmentation, and it has given me lots of ideas for ways in which user code could benefit from swapping out segment registers. Specifically, I am interested in the x86-64 architecture because that's what I have.
Is there any way in which a user-mode program can create a new segment, for internal use?
To what extent can a program configure its own address space?
I imagine the GDT is way outside of a process' reach, but can a process modify the LDT?
Sorry if I sound naive, this is new stuff for me.
I also imagine that, if there even is a way to do this, it will almost certainly pass through OS-specific functions. How would one do this in, say, Win32? I have found GetThreadSelectorEntry (https://learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-getthreadselectorentry), but I haven't been able to find the equivalent SetThreadSelectorEntry.
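For reference, on Linux it looks like the modify_ldt(2) syscall is how a process installs an entry in its own LDT; below is a rough, untested sketch of that (the Windows-side equivalent is what I'm still looking for). Note that base_addr in struct user_desc is only 32 bits, so the buffer has to live in the low 4 GB.
    /* Rough, untested sketch (Linux): install an LDT entry from user mode via
       modify_ldt(2). There is no glibc wrapper, so the raw syscall is used. */
    #include <asm/ldt.h>        /* struct user_desc */
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <stdio.h>
    int main(void)
    {
        static char buf[4096];  /* memory the new segment should cover */
        struct user_desc ud = {
            .entry_number    = 0,                                  /* first LDT slot */
            .base_addr       = (unsigned int)(unsigned long)buf,   /* 32-bit base only */
            .limit           = sizeof(buf) - 1,
            .seg_32bit       = 1,
            .contents        = 0,                                  /* data, expand-up */
            .read_exec_only  = 0,
            .limit_in_pages  = 0,
            .seg_not_present = 0,
            .useable         = 1,
        };
        if (syscall(SYS_modify_ldt, 1, &ud, sizeof(ud)) != 0) {
            perror("modify_ldt");
            return 1;
        }
        /* Selector: index 0, TI = 1 (LDT), RPL = 3. In 64-bit mode only FS and
           GS actually honor a descriptor base, so this would be loaded into %fs. */
        unsigned short sel = (0 << 3) | 0x7;
        printf("LDT entry installed, selector = 0x%x\n", sel);
        return 0;
    }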

Related

Using EEPROM in STM32f10x

I'm using an STM32F103 and in my program I need to save some bytes in the internal flash memory. But as far as I know, I have to erase a whole page before I can write to it, which takes time.
This delay causes my display to blink.
Can anybody help me to save my data without consuming so much time?
Here is a list that may help:
1- MCU: STM32F103
2- IDE: Keil µVision
3- Using the HAL driver provided by STM32CubeMX
4- Sample data to save in flash: {0x53, 0xa0, 0x01, 0x54}
In the link below, you can find the code that I'm using.
FLASH_PAGE for Keil
The code you provide doesn't seem to be implemented well. It basically does 2 things each time you initiate a write operation:
Erase the page (this is the part that takes time)
Starting from the given pointer, write until a zero is hit.
This is a very inefficient way of using the flash.
Probably the simplest and the most well-known way is to use the method described in ST's AN2594, although it has some limitations.
Still, at some point a page erase will be necessary regardless of the method you use, and there is no way to avoid some delay unless your uC supports dual flash banks (the STM32F103 doesn't have this feature). You need to plan the timing of flash writes and display refreshes accordingly. If you need periodic writes to the flash, there is probably some higher-level error in your design.
To solve this problem, I used another library that ST itself provides: include "eeprom.h" in your project and then add "eeprom.c" to it. You can easily find these files on the Internet.
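If you go that route, the usage looks roughly like the sketch below. I am assuming the EE_Init / EE_ReadVariable / EE_WriteVariable API and the NB_OF_VAR / VirtAddVarTab names from ST's EEPROM-emulation example code (AN2594 style); the exact names differ a bit between library versions, so treat this as a sketch rather than drop-in code.
    /* Sketch of ST's EEPROM-emulation usage (names taken from the example code
       that ships with the application note; they may differ between versions). */
    #include "stm32f1xx_hal.h"
    #include "eeprom.h"
    /* Virtual addresses of the variables kept in emulated EEPROM. */
    uint16_t VirtAddVarTab[NB_OF_VAR] = { 0x0001, 0x0002 };
    void save_sample(void)
    {
        uint16_t value = 0xA053;   /* example data to persist */
        uint16_t readback;
        HAL_FLASH_Unlock();        /* flash must be unlocked before EE_* calls */
        EE_Init();                 /* recovers/transfers pages if needed       */
        EE_WriteVariable(VirtAddVarTab[0], value);
        EE_ReadVariable(VirtAddVarTab[0], &readback);
        HAL_FLASH_Lock();
    }
The library spreads writes over a reserved pair of flash pages, so the expensive page erase only happens occasionally instead of on every write.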

Difference between shared memory IPC mechanism and API/system-call invocation

I am studying operating systems (Silberschatz, Galvin et al.). My programming experience is limited to occasional coding of exercise problems from a programming or algorithms text. In other words, I do not have proper application-programming or systems-programming experience. I think my question below is a result of that lack of experience, and hence a lack of context.
I am specifically studying IPC mechanisms. While reading about shared memory (SM), I couldn't imagine a real-life scenario where processes communicate using SM. An inspection of the processes attached to the same SM segments on my Linux (Ubuntu) machine (using 'ipcs' in a small shell script) is uploaded here.
Most of the sharing by applications seems to be with the X daemon. From what I know, X is the process responsible for giving me my GUI. I inferred that these applications (mostly applets which sit in my taskbar) share data with X about what needs to change in their appearance and displayed values. Is this a reasonable inference?
If so,
my question is: what is the difference between my applications communicating with X via shared memory segments versus my applications invoking certain APIs provided by X to tell it that their appearance needs to be refreshed? By difference I mean: why isn't the latter approach used?
Isn't that how user processes and the kernel communicate? An application invokes a system call when it wants to, say, read a file, communicating the name of the file and other related info via the arguments of the system call.
Also could you provide me with examples of routinely used applications which make use of shared memory and message-passing for communication?
EDIT
I have made the question clearer. I have formatted the edited part in bold.
First, since the X server is just another user-space process, it cannot use the operating system's system-call mechanism. Even when the communication is done through an API, if it is between user-space processes there will be some inter-process communication (IPC) mechanism behind that API, which might be shared memory, sockets, or something else.
Typically shared memory is used when a lot of data is involved. Maybe there is a lot of data that multiple processes need to access, and it would be a waste of memory for each process to have its own copy. Or a lot of data needs to be communicated between processes, which would be slower if it were to be streamed, a byte at a time, through another IPC mechanism.
For graphics, it is not uncommon for a program to keep a buffer containing a pixel map of an image, a window, or even the whole screen that then needs to be regularly copied to the screen. Sometimes at a very high rate...30 times a second or more. I suspect this is why X uses shared memory when possible.
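To make that concrete, here is a minimal sketch using POSIX shared memory (shm_open/mmap). This is not the MIT-SHM mechanism X actually uses, and the segment name "/demo_shm" is just made up for the example, but the idea is the same: both processes map the same region and read and write it directly.
    /* Minimal POSIX shared-memory sketch: one process creates and fills a
       buffer; any other process can shm_open()/mmap() the same name and see
       exactly the same bytes, with no copying through the kernel. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>
    int main(void)
    {
        const size_t size = 4096;
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }
        char *buf = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }
        strcpy(buf, "a frame's worth of pixels would go here");
        munmap(buf, size);
        close(fd);
        /* shm_unlink("/demo_shm") once nobody needs the segment any more. */
        return 0;
    }
In a real design you would add a semaphore or other synchronization on top, as the other answers point out.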
The difference is that with an API you as a developer might not have access to what is happening inside these functions, so memory would not necessarily be shared.
Shared memory is basically a specific region of memory that both apps can write to and read from. This of course requires that access to that memory is synchronized so things don't get corrupted.
Using somebody's API does not mean you are sharing memory with them; that process will just do what you asked and perhaps return the result of that operation to you, but that doesn't necessarily go via shared memory. Although it could; it depends, as always.
The preference for one over the other depends, I'd say, on the specifics of the particular application: what it is doing and what it needs to share. I can imagine a big dataset of some kind being shared via shared memory, while passing a file name to another app might only need an API call. It is largely dependent on the requirements.

Programming on real-time system

My problem is understanding programming on a real-time system. I'm confused about this topic. What can I do and what can I not do in my source code? I know there are precautions to take while writing the code, but I don't know exactly what they are. Some examples: Is it possible to use dynamic memory allocation (new)? Is it possible to access the disk during real-time operation? What kinds of IPC (inter-process communication) can I use? Can I use standard inter-process locking? And what about file locking? I have searched on the Internet but didn't find what I want. Where can I better understand these problems? I hope someone can help me. Sorry for my English!
You can do whatever your language/compiler of choice supports.
What you should do really depends on what the target system is and what your program is (you could be writing an OS for all I know), etc.
A real-time system is all about determinism: fixed, predictable timing for each operation. Check this out for some guidelines:
http://cs.brown.edu/~ugur/8rulesSigRec.pdf
What defines a real-time/near-real time system?
On the software side (your focus):
a. Avoid buffering or caching in your code. Caching is meant to speed up processing after the first pass, but it makes the timing indeterminate.
b. Minimize conditional branching, as it creates different paths with different timings; this is especially important for the time-sensitive components.
c. Avoid asynchronous or interrupt-based designs. Use polling whenever possible; that increases the predictability of the timing.
d. Use a real-time OS (like the LynxOS RTOS) whenever possible. It has high responsiveness and predictability in its processing. But if you look at its internals, you will see that it skips a lot of error processing, it has a low threshold for the maximum number of processes it can spawn, etc. That is, there is always plenty of spare CPU power left over, to ensure that the responsiveness is there. Of course, the moment you push the numbers to their limits (e.g. spawning lots of processes), LynxOS no longer exhibits real-time behavior.
Just lots of common sense applied when you do the coding.
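On the dynamic-allocation part of the question specifically: malloc/new are usually avoided in the time-critical path because their worst-case time is hard to bound. A common alternative is to reserve everything up front and hand out fixed-size blocks in constant time. A rough sketch (the sizes are arbitrary):
    /* Sketch of a fixed-size block pool: all memory is reserved at startup,
       so alloc/free take constant, predictable time (no heap walk, no new). */
    #include <stddef.h>
    #define BLOCK_SIZE  64
    #define BLOCK_COUNT 32
    static unsigned char pool[BLOCK_COUNT][BLOCK_SIZE];
    static void *free_list[BLOCK_COUNT];
    static int   free_top;
    void pool_init(void)
    {
        for (int i = 0; i < BLOCK_COUNT; i++)
            free_list[i] = pool[i];
        free_top = BLOCK_COUNT;
    }
    void *pool_alloc(void)           /* O(1), never touches the heap */
    {
        return (free_top > 0) ? free_list[--free_top] : NULL;
    }
    void pool_free(void *p)          /* O(1) */
    {
        if (p && free_top < BLOCK_COUNT)
            free_list[free_top++] = p;
    }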

How do I configure an ATA hard disk to start generating interrupts?

RESOLVED
After much confusion and frustration, I finally got my hard disk to interrupt. :D It basically came down to the fact that I kept reading the status register instead of the alternate status register. A few other things were messed up to boot, but the point is my hard disk driver is finally starting to take shape. Now, for others I will leave the original post.
P.S. For further clarification, I didn't need to issue any sort of reset command. All I did was the following:
Select the device (didn't want to kill the Solaris OS on the other disk)
clear the nIEN bit in the DEVICE CONTROL register
issue an IDENTIFY DEVICE command***
Actually, I am not sure if the IDENTIFY DEVICE command is needed, because I left the lab happy before I could test the code without issuing it. However, the main point is that I needed to be sure to read the alternate status register and have the nIEN bit cleared, with no reset required. The BIOS apparently takes care of most of the setup.
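In code, that sequence comes down to something like the sketch below. It assumes the legacy primary-channel port addresses (0x1F0-0x1F7 for the command block, 0x3F6 for device control / alternate status) and placeholder outb(port, value) / inb(port) routines, so adapt it to whatever port-I/O helpers your OS already has.
    /* Sketch of the sequence above for the legacy primary ATA channel. */
    extern void outb(unsigned short port, unsigned char value);  /* (port, value) */
    extern unsigned char inb(unsigned short port);
    #define ATA_DRIVE_HEAD   0x1F6   /* device select register   */
    #define ATA_COMMAND      0x1F7   /* command register (write) */
    #define ATA_DEV_CTRL     0x3F6   /* device control (write)   */
    #define ATA_ALT_STATUS   0x3F6   /* alternate status (read)  */
    #define ATA_CMD_IDENTIFY 0xEC
    void ata_enable_irqs(void)
    {
        outb(ATA_DRIVE_HEAD, 0xA0);        /* select master on the primary channel */
        outb(ATA_DEV_CTRL, 0x00);          /* nIEN (bit 1) = 0 -> INTRQ enabled    */
        outb(ATA_COMMAND, ATA_CMD_IDENTIFY);
        /* Poll the ALTERNATE status register while waiting; reading the normal
           status register (0x1F7) would acknowledge and clear the pending IRQ. */
        while (inb(ATA_ALT_STATUS) & 0x80) /* BSY */
            ;
    }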
I am currently trying to write a disk driver for a hobby OS being developed at my school. I currently have routines to read/write data in the PCI configuration space and assembly routines to do port I/O with the various registers defined by ATA/ATAPI-7. Now, my question is: specifically, how do I get an IDE hard drive to start generating interrupts? I have been looking through all this documentation and it hasn't become clear to me what I am doing wrong.
Can someone explain exactly what causes an IDE hard drive to start generating interrupts? I already have an interrupt service routine ready to test, but am having difficulty getting the interrupts in the first place. Can this be accomplished through an ATA soft reset?
Thanks!
UPDATE: Ok, I was able to get the secondary channel, an ATAPI CDROM to generate interrupts by setting the SRST bit in the DEVICE CONTROL register for a soft reset. This does not work for the hard disk on the primary channel. What I have noticed so far is that when I set the SRST bit for the HDD, it sets the BSY bit and leaves it set. From there I don't know what to do.
This reference should help you a fair bit: Kenos description of programming ATA/ATAPI.
The basic mechanism to enable interrupts is to clear nIEN in the DCR (Device Control Register):
nIEN: Drive Interrupt Enable bit. The enable bit for the drive interrupt to the host. When nIEN is 0 or the drive is selected the host interrupt signal INTRQ is enabled through a tri state buffer to the host. When nIEN is 1 or the drive is not selected the host interrupt signal INTRQ is in a high impedance state regardless of the presence or absence of a pending interrupt.
The site www.ata-atapi.com is a good jumping-off point to find way more info about ATA/PATA/SATA/ATAPI than you want to know... Note that the official ATA-6/7/etc. specs cost $$ from T13, though you can download current drafts of ATA-8 from them.
This link describes a few of the many ways ATA devices vary from the specs. (I used to write SCSI and ATA/ATAPI drivers for Commodore/Amiga, way back when, as well as help with qualifying drives - or more accurately, figuring out what idiocies drive makers had done.)
If this is just a hobby OS, why not use the BIOS interrupt (int 13h)? Admittedly not as fast as direct disk access, but safer for your hard drive (I've put a read head through a platter before while messing with disk I/O).

How are Operating Systems "Made"?

Creating an OS seems like a massive project. How would anyone even get started?
For example, when I pop Ubuntu into my drive, how can my computer just run it?
(This, I guess, is what I'd really like to know.)
Or, looking at it from another angle, what is the least amount of bytes that could be on a disk and still be "run" as an OS?
(I'm sorry if this is vague. I just have no idea about this subject, so I can't be very specific. I pretend to know a fair amount about how computers work, but I'm utterly clueless about this subject.)
Well, the answer lives in books: Modern Operating Systems by Andrew S. Tanenbaum is a very good one.
The simplest yet complete operating system kernel, suitable for learning or just curiosity, is Minix.
Here you can browse the source code.
Operating systems is a huge topic. The best thing I can recommend, if you want to go really in depth into how operating systems are designed and constructed, is a good book:
Operating System Concepts
If you are truly curious I would direct you to Linux from Scratch as a good place to learn the complete ins and outs of an operating system and how all the pieces fit together. If that is more information than you are looking for then this Wikipedia article on operating systems might be a good place to start.
A PC knows to look at a specific sector of the disk for its startup instructions. These instructions then tell the processor that on given processor interrupts, specific code is to be called. For example: on a periodic tick, call the scheduler code; when something arrives from a device, call the device-driver code.
Now, how does an OS set everything up within the system? Well, hardware has APIs too. They are written with the systems programmer in mind.
I've seen a lot of bare-bones OSes, and this is really the absolute core. There are many embedded, home-grown OSes that do only this and nothing else.
Additional features, such as requiring applications to ask the operating system for memory, or requiring special privileges for certain actions, or even processes and threads themselves, are really optional, though they are implemented on most PC architectures.
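At the lowest level, a "hardware API" usually just means registers at fixed addresses with a documented bit layout. The sketch below is purely illustrative (the address and bits of this pretend UART are made up), but it is the shape most bare-metal driver code takes:
    /* Illustrative only: a hypothetical memory-mapped UART. The base address
       and bit layout are invented for the example; a real chip's datasheet
       gives the actual ones. */
    #include <stdint.h>
    #define UART_BASE   0x10000000u                       /* made-up address */
    #define UART_DATA   (*(volatile uint32_t *)(UART_BASE + 0x0))
    #define UART_STATUS (*(volatile uint32_t *)(UART_BASE + 0x4))
    #define TX_READY    (1u << 0)
    void uart_putc(char c)
    {
        while (!(UART_STATUS & TX_READY))  /* wait until the device can accept */
            ;
        UART_DATA = (uint32_t)c;           /* writing the register sends the byte */
    }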
The operating system is, simply, what empowers your software to manage the hardware. Clearly some OSes are more sophisticated than others.
At its very core, a computer starts executing at a fixed address, meaning that when the computer starts up, it sets the program counter to a pre-defined address, and just starts executing machine code.
In most computers, this "bootstrapping" process immediately initializes known peripherals (like, say, a disk drive). Once they are initialized, the bootstrap process will use some predefined sequence to leverage those peripherals. Using the disk drive again, the process might read code from the first sector of the hard drive, place it in a known location within RAM, and then jump to that address.
These predefined sequences (the start of the CPU, the loading from disk) allow programmers to keep adding more and more code to the early parts of the CPU startup, which over time can, eventually, start up very sophisticated programs.
In the modern world, with sophisticated peripherals, advanced CPU architectures, and vast, vast resources (GBs of RAM, TBs of disk, and very fast CPUs), the operating system can support quite powerful abstractions for the developer (multiple processes, virtual memory, loadable drivers, etc.).
But for a simple system with constrained resources, you don't really need a whole lot for an "OS".
As a simple example, many small controller computers have very small "OS"es, and some may simply be considered a "monitor", offering little more than easy access to a serial port (or a terminal, or an LCD display). Certainly, there's not a lot of need for a large OS under those conditions.
But also consider something like a classic Forth system. Here you have a system with an "OS" that gives you disk I/O, console I/O, and memory management, plus the actual programming language as well as an assembler, and all of it fits in less than 8K of memory on an 8-bit machine.
Or the old days of CP/M with its BIOS and BDOS.
CP/M is a good example of a simple OS working well as an abstraction layer, allowing portable programs to run on a vast array of hardware, and even then the system took less than 8K of RAM to start up and run.
A far cry from the MBs of memory used by modern OSes. But, to be fair, we HAVE MBs of memory, and our lives are MUCH MUCH simpler (mostly), and far more functional, because of it.
Writing an OS is fun because it's interesting to make the HARDWARE print "Hello World", shoving data one byte at a time out some obscure I/O port, or stuffing it into some magic memory address.
Get an x86 emulator and party down getting a boot sector to say your name. It's a giggly treat.
Basically... your computer can just run the disk because:
The BIOS includes that disk device in the boot order.
At boot, the BIOS scans all bootable devices in order: the floppy drive, the hard drive, the CD-ROM. Each device accesses its media and checks a hard-coded location (typically a sector, on a disk or CD device) for a fingerprint that identifies the media and gives the location on the disk (or media) where the instructions start. The BIOS tells the device to move its head (or whatever) to the specified location on the media and read a big chunk of instructions. The BIOS hands those instructions off to the CPU.
The CPU executes these instructions. In your case, these instructions are going to start up the Ubuntu OS. They could just as well be instructions to halt, or to add 10+20, etc.
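For a PC-style disk, the "fingerprint" mentioned above is simply the two bytes 0x55 0xAA at the end of the first 512-byte sector. A small sketch that checks a raw disk image for it (the image file name comes from the command line; nothing else is assumed):
    /* Check the classic PC boot-sector signature: bytes 510 and 511 of the
       first 512-byte sector must be 0x55 0xAA for the BIOS to boot from it. */
    #include <stdio.h>
    int main(int argc, char **argv)
    {
        unsigned char sector[512];
        FILE *f;
        if (argc < 2 || !(f = fopen(argv[1], "rb"))) {
            fprintf(stderr, "usage: bootcheck <disk-image>\n");
            return 1;
        }
        if (fread(sector, 1, 512, f) != 512) {
            fprintf(stderr, "could not read the first sector\n");
            return 1;
        }
        printf("%s\n", (sector[510] == 0x55 && sector[511] == 0xAA)
                           ? "bootable" : "not bootable");
        fclose(f);
        return 0;
    }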
Typically, an OS will start off by taking a large chunk of memory (again, directly from the CPU, since library commands like 'GlobalAlloc' etc aren't available as they're provided by the yet-to-be-loaded-OS) and starts creating structures for the OS itself.
An OS provides a bunch of 'features' for applications: memory management, file system, input/output, task scheduling, networking, graphics management, access to printers, and so on. That's what it's doing before you 'get control' : creating/starting all the services so later applications can run together, not stomp on each other's memory, and have a nice API to the OS provided services.
Each 'feature' provided by the OS is a large topic. An OS provides them all so applications just have to worry about calling the right OS library, and the OS manages situations like two programs trying to print at the same time.
For instance, without the OS, each application would have to deal with the situation where another program is trying to print, and 'do something' like print anyway, or cancel the other job, etc. Instead, only the OS has to deal with it; applications just say to the OS 'print this stuff', and the OS ensures one app prints while all the other apps wait until the first one finishes or the user cancels it.
The least amount of bytes to be an OS doesn't really make sense, as an "OS" could imply many, or very few, features. If all you wanted was to execute a program from a CD, that would be very very few bytes. However, that's not an OS. An OS's job is to provide services (I've been calling them features) to allow lots of other programs to run, and to manage access to those services for the programs. That's hard, and the more shared resources you add (networks, and wifi, and CD burners, and joysticks, and iSight video, and dual monitors, etc, etc) the harder it gets.
http://en.wikipedia.org/wiki/Linux_startup_process you are probably looking for this.
http://en.wikipedia.org/wiki/Windows_NT_startup_process or this.
One of the most recent operating system projects I've seen with serious backing is a Microsoft Research project called Singularity, which is written from scratch entirely in C#/.NET.
To get an idea of how much work it takes: there are 2 core devs, but they have up to a dozen interns at any given time, and it still took them two years before they could even get the OS to the point where it would boot up and display BMP images (that's how they used to do their presentations). It took much more work before they got to the point where there was a command line (about 4 years).
Basically, there are many arguments about what an OS actually is. If you could get everyone to agree on what an OS specifically is (is it just the kernel? everything that runs in kernel mode? is the shell part of the OS? is X part of the OS? is a web browser part of the OS?), your question would be answered. Otherwise, there's no specific answer to your question.
Oh, this is a fun one. I've done the whole thing at one point or another, and been there through a large part of the evolution.
In general, you start writing a new OS by starting small. The simplest thing is a bootstrap loader, which is a small chunk of code that pulls a chunk of code in and runs it. Once upon a time, with the Nova or PDP computers, you could enter the bootstrap loader through the front panel: you entered the instructions hex number by hex number. The boot loader then reads some medium into memory and sets the program counter to the start address of that code.
That chunk of code usually loads something else, but it doesn't have to: you can write a program that's meant to run on the bare metal. That sort of program does something useful on its own.
A real operating system is bigger and has more pieces. You need to load programs, put them in memory, and run them; you need to provide code to run the I/O devices; and as it gets bigger, you need to manage memory.
If you want to really learn how it works, find Doug Comer's Xinu books, and Andy Tanenbaum's newest operating systems book on Minix.
You might want to get the book The Design and Implementation of the FreeBSD Operating System for a very detailed answer. You can get it from Amazon, or this link to FreeBSD.org's site looks like the book as I remember it.
Try How Computers Boot Up, The Kernel Boot Process and other related articles from the same blog for a short overview of what a computer does when it boots.
What a computer does when it starts is heavily dependent (maybe obviously?) on the CPU design and other low-level details; therefore it's kind of difficult to anticipate what your computer does when it boots.
I can't believe this hasn't been mentioned... but a classic book for an overview of operating system design is Operating Systems - Design and Implementation written by Andrew S Tanenbaum, the creator of MINIX. A lot of the examples in the book are geared directly towards MINIX as well.
If you would like to learn a bit more, OS Dev is a great place to start. Especially the wiki. This site is full of information as well as developers who write personal operating systems for a small project/hobby. It's a great learning resource too, as there are many people in the same boat as you on OSDev who want to learn what goes into an OS. You might end up trying it yourself eventually, too!
The operating system (OS) is the layer of software that controls the hardware. The simpler the hardware, the simpler the OS, and vice versa ;-)
In the early days of microcomputers, you could fit the OS into a 16K ROM and hard-wire the motherboard to start executing machine-code instructions at the start of the ROM address space. This 'bootstrap' process would then load the code for the drivers for the other devices like the keyboard, monitor, floppy drive, etc., and within a few seconds your machine would be booted and ready for use.
Nowadays... same principle, but a lot more and more complex hardware ;-)
Well, you have something linking the startup of the chip to a "BIOS" and then to an OS; that is usually a very complicated job done by many layers of code.
If you REALLY want to know more about this, I would recommend reading a book about microcontrollers, especially one where you create a small OS in C for an 8051 or the like, or learning some x86 assembly and creating a very small "bootloader OS".
You might want to check out this question.
An OS is a program, just like any other application you write. The main purpose of this program is that it allows you to run other programs. Modern OSes take advantage of modern hardware to ensure that programs do not clash with one another.
If you are interested in writing your own OS, check out my own question here:
How to get started in operating system development
You ask how few bytes could be put on a disk and still run as an OS. The answer depends on what you expect of your OS, but the smallest useful OS that I know of fits in 1.7 megabytes. It is Tom's Root Boot disk, and it is a very nice, if small, OS with "rescue" applications that fits on one floppy disk. Back in the days when every machine had a floppy drive and not every machine had a CD-ROM drive, I used to use it frequently.
My take on it is that it is like your own life. At first, you know very little: just enough to get along. This is similar to what the BIOS provides: it knows enough to look for a disk drive and read information off of it. Then you learn a little bit more when you go to elementary school. This is like the boot sector being read into memory and being given control. Then you go to high school, which is like the OS kernel loading. Then you go to college (drivers and other applications). Of course, this is the point at which you are liable to CRASH. HE HE.
Bottom line is that layers of more and more capability are slowly loaded on. There's nothing magic about an OS.
Reading through here will give you an idea of what it took to create Linux
https://netfiles.uiuc.edu/rhasan/linux/
Another really small operating system that fits on one disk is QNX (when I last looked at it, a long time ago, the whole OS, with a GUI, web browser, disk access, and a built-in web server, fit on one floppy disk).
I haven't heard too much about it since then, but it is a real time OS so it is designed to be very fast.
Actually, some people attend a 4-year college to get a rough idea of this...
At its core, an OS is extremely simple. Here's the beginner's guide to WHAT successful OSes are made to do:
1. Manage the CPU, using a scheduler that decides which process (a program's running instance) gets scheduled.
2. Manage memory, deciding which processes use it for storing instructions (code) and data (variables).
3. Manage I/O interfaces such as disk drives, alarms, keyboard, and mouse.
Now, the above 3 requirements give rise to the need for processes to communicate (and not fight!), to interact with the outside world, and to help applications do what they want to do.
To dig deeper into HOW it does that, read Dinosaur book :)
So, you can make an OS as small as you want, as long as you manage to handle all the hardware resources.
When you boot up, the BIOS tells the CPU to start running the bootloader, which loads the first function of the OS, residing at a fixed address in memory, something like the main() of a small C program. That function then creates processes and threads and starts the big bang!
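To make point 1 above a little more concrete: the smallest possible "scheduler" is just a loop over a table of tasks. The sketch below is a toy cooperative version (the task names are invented); a real kernel adds a timer interrupt for preemption, priorities, and blocking/waking.
    /* Toy cooperative "scheduler": a fixed table of tasks run round-robin.
       Each task runs to completion and must return quickly. */
    typedef void (*task_fn)(void);
    static void task_blink(void) { /* toggle an LED, bump a counter, ... */ }
    static void task_uart(void)  { /* drain a serial buffer, ...         */ }
    static task_fn tasks[] = { task_blink, task_uart };
    void scheduler_run(void)
    {
        for (;;) {                                               /* the OS "main loop" */
            for (unsigned i = 0; i < sizeof tasks / sizeof tasks[0]; i++)
                tasks[i]();
        }
    }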
First, read, read, and read about what an OS is; then about the uses, types, nature, objectives, and needs of the different OSes.
Some links are below; a newbie will enjoy these:
Modern OS - this gives an idea about OSes in general.
Start of OS - this gives the basics of what it really takes to MAKE an OS, how you can make one, and how you can modify the open-source code of an existing OS yourself.
Wiki OS - gives an idea about the different OSes used in different fields and their uses (objectives/features of an OS).
Let's see in general what an OS contains (not something as sophisticated as Linux or Windows):
An OS needs a CPU, and to get code onto it you need a bootloader.
An OS must have objectives to fulfill, and those objectives must be defined in a wrapper, which is called the kernel.
Inside, you could have scheduling, timers, and ISRs (this depends on the objectives and the kind of OS you want to make).
OS development is complicated. There are websites like OSDev or lowlevel.eu (German) dedicated to the topic. There are also some books, which others have mentioned already.
I can't help but also reference the "Write your own operating system" video series on youtube, as I'm the one who made it :-)
See https://www.youtube.com/playlist?list=PLHh55M_Kq4OApWScZyPl5HhgsTJS9MZ6M