How does keyboard input reach the right virtual terminal in a GUI?

Let's say we have an 8-core system running Linux, you are using a GUI desktop, and you have 10-20 terminals open.
When you type something, the input appears on the correct terminal. How does that happen? For example, the keyboard interrupt can arrive on any of the CPUs, so my question is how it is routed to the correct process (given that at any moment 10 processes are waiting for user input).
This is what I know:
The keyboard driver has an interrupt handler that reads the input and copies it to a buffer, which might then be processed by some high-priority work queue. (Not necessarily, but that is what I expect happens.)
This buffer has to be copied into the buffer of the stdin file descriptor of the currently active shell.
What I don't know
How does the work-queue function determine which process is running the currently active shell?
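In sketch form, the handoff I am imagining looks something like this (all names are hypothetical; I assume a real driver is far more involved):

```c
/* Hypothetical sketch of the ISR-to-buffer handoff described above.
 * All names are made up; a real Linux keyboard driver is far more involved. */
#include <stdint.h>

#define BUF_SIZE 64

static volatile uint8_t buf[BUF_SIZE];
static volatile unsigned head, tail;   /* head: ISR writes; tail: consumer reads */

/* Runs in interrupt context: stash the scancode and return quickly. */
void keyboard_isr(uint8_t scancode)
{
    unsigned next = (head + 1) % BUF_SIZE;
    if (next != tail) {                /* drop the byte if the buffer is full */
        buf[head] = scancode;
        head = next;
    }
}

/* Runs later, e.g. from a work queue: drain whatever has arrived. */
int keyboard_pop(uint8_t *out)
{
    if (tail == head)
        return 0;                      /* nothing buffered */
    *out = buf[tail];
    tail = (tail + 1) % BUF_SIZE;
    return 1;
}
```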

It just knows. One of all the processes is marked as the current one for console I/O. When you switch to another, that other one gets marked as current. I don't know the details of the implementation, but that's the idea.
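For terminals specifically, that mark has a concrete name on POSIX systems: the foreground process group of the tty, to which the line discipline delivers input. A minimal sketch using the real tcgetpgrp()/tcsetpgrp() calls (job_pgid below is a hypothetical value a shell would supply):

```c
/* Sketch: the "current for console I/O" mark, POSIX style. Each tty has a
 * foreground process group; the line discipline delivers input only to it. */
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t fg = tcgetpgrp(STDIN_FILENO);    /* who currently owns this tty? */
    printf("foreground pgid: %d, my pgid: %d\n", (int)fg, (int)getpgrp());

    /* A shell hands the terminal over when it resumes a job:
     *     tcsetpgrp(STDIN_FILENO, job_pgid);   // job_pgid is hypothetical
     */
    return 0;
}
```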

The work-queue function does not determine which process is running - this is done at a much higher level. The keyboard device is exported by the kernel through a device file in /dev/input/ (on my system it is /dev/input/event3 - you can look at /dev/input/by-id to see which one corresponds to your keyboard). This device file is opened by the X server in order to receive the events (look for the device file in /var/log/Xorg.0.log to see where this happens). The X server thus receives all the keyboard events and dispatches them to the right client itself. Knowing which window has the focus, it puts the corresponding input event into that client's queue and sends a signal to the corresponding process, which is woken up and can process the event.
See http://en.wikipedia.org/wiki/Evdev and related links for more information.
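As a rough illustration of the consumer side, here is a minimal evdev reader, essentially what the X server does with that device file. The event3 path is the one mentioned above and is machine-specific (check /dev/input/by-id); reading it typically requires root:

```c
/* Minimal evdev reader - roughly the consumer side of what the X server
 * does. The device path is an assumption; check /dev/input/by-id on your
 * machine. Typically requires root. */
#include <fcntl.h>
#include <linux/input.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/input/event3", O_RDONLY);   /* assumed keyboard node */
    if (fd < 0) { perror("open"); return 1; }

    struct input_event ev;
    while (read(fd, &ev, sizeof ev) == (ssize_t)sizeof ev) {
        if (ev.type == EV_KEY)                      /* key press/release/repeat */
            printf("key code %d, value %d\n", ev.code, ev.value);
    }
    return 0;
}
```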

Related

Keil debugger changes the hardware state of STM32H7 regarding FIFOs

I encountered the following issue while using Keil MDK 5 with an STM32H743.
I had a communication problem with my SPI code, and after a while I found out that it was due to the Periodic Window Update feature.
When it is activated, the debugger seems to read the SPI data register regularly, which reads the FIFO (and so changes the state of the FIFO). Consequently, when the software reads the FIFO, some bytes have been "lost" (consumed by the debugger).
Is this expected behaviour? Do you know if it is due to Keil or to the STM32?
I don't fully understand how an access from the debugger to a register works: I guess a read command is sent over SWD, but then, internally, does the access to memory go through AHB/APB like for code executing on the CPU?
Any register that modifies behaviour by being read (such as clearing status bits) can be problematic when debugging if it is shown in a debug window.
The best bet is to only look at such registers when you stop (and close the peripheral's DR window otherwise), and always be aware that you may clear status bits etc.
It is the way the processor works and nothing to do with the debugger.
It is a very common debug issue with serial comms etc.
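To make the read side effect concrete, here is a polled-receive sketch for an H7 SPI; the base address and register offsets are my reading of the RM0433 reference manual, so double-check them for your exact part. The point is the final line: any read of RXDR pops the FIFO, whether the reader is your firmware or the debugger's periodic window update.

```c
/* Sketch: a register whose READ has a side effect (STM32H7 SPI RX FIFO).
 * Base address and offsets assumed from the RM0433 reference manual - verify. */
#include <stdint.h>

#define SPI1_BASE   0x40013000u
#define SPI1_SR     (*(volatile uint32_t *)(SPI1_BASE + 0x14u))
#define SPI1_RXDR   (*(volatile uint32_t *)(SPI1_BASE + 0x30u))
#define SPI_SR_RXP  (1u << 0)            /* "RX packet available" flag */

uint8_t spi1_read_byte(void)
{
    while (!(SPI1_SR & SPI_SR_RXP))
        ;                                /* wait until the FIFO holds data */

    /* This read pops one entry from the RX FIFO. A debugger window that
     * displays RXDR performs the same read - and steals the byte. */
    return (uint8_t)SPI1_RXDR;
}
```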
If you have the DR displayed in your watch window (or any similar window on the debugger screen) and you step through the code, the data register is read every time you step (or, generally, break).
That is the only possible reason.

Can I open and run from multiple command line prompts in the same directory?

I want to open two command line prompts (I am using CMDer) from the same directory and run different commands at the same time.
Would those two commands interrupt each other?
One is for compiling a web application I am building (it takes about 7 minutes to compile), and the other is to check the history of the commands I ran (that one should finish quickly).
Thank you!
Assuming that CMDer does nothing more than issue the same commands to the operating system as a standard cmd.exe console would, the answer is a clear "Yes, they do interfere, but it depends" :D
Breakdown:
The first part, "opening multiple consoles", is certainly possible. You can open N console windows and switch each of them to the same directory without any problems (except maybe RAM restrictions).
The second part, "running commands which do or do not interfere", is the tricky one. If your idea is that a console window presents something like an isolated environment, where you can do things as you like and, when you close the window, everything is back to normal as if you had never touched anything (think of a virtual-machine snapshot which is reverted when closing the VM), then the answer is: this is not the case. Cross-console effects will be observable.
Think about deleting a file in one console window and then opening it in a second console window: it would not be very intuitive if the file had not vanished in the second console window as well.
However, there can be delays until changes to the file system become visible in another console window. It could be that you delete the file in one console, run dir on its directory in another console, and still see the file in the listing. But if you try to access it, the operating system will certainly quit with an error message of the kind "File not found".
Generally, you should consider a console window to be a "view" of your system. If you do something in one window, the effect will be present in the other, because you changed the underlying system, which exists only once (the system is the "model", as in the "Model-View-Controller" design pattern you may have heard of).
An exception to this is the environment variables. These are copied from the current state when a console window is started, and if you change the value of such a variable, the other console windows stay unaffected.
So, in your scenario: if you let a build/compile operation run, and during this process some files on your file system are created, read (locked), altered, or deleted, then there is a potential conflict if the other console window tries to access the same files. This is a so-called "race condition": a non-deterministic situation in which it is unclear which state of a file the second console window will see (or both windows, if the second one also changes files the first one wants to work with).
If there is no interference at the file level (reading the same files is allowed; writing to the same file is not), then there should be no problem letting both tasks run at the same time.
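As a sketch of that rule in POSIX terms (Windows expresses the same idea through the sharing mode passed to CreateFile), the following program takes an exclusive advisory lock, so a second instance started from another console is refused. The file name build.lock is made up for the example.

```c
/* Sketch of "writing to the same file is not allowed", POSIX-flavoured.
 * Run it from two consoles: the second instance is refused.
 * The file name "build.lock" is made up for the example. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    int fd = open("build.lock", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (flock(fd, LOCK_EX | LOCK_NB) != 0) {   /* try an exclusive lock */
        fprintf(stderr, "another console is using the file\n");
        return 1;
    }
    puts("lock held; simulating a long build...");
    sleep(30);                                  /* the "7 minute compile" */
    flock(fd, LOCK_UN);
    return 0;
}
```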
However, at a very detailed level, both processes do interfere in that they need the same limited (but plentiful) CPU and RAM resources of your system. This should not pose any problems with today's PC computing power, considering multiple cores, 16 GB of RAM, terabytes of hard drive storage or fast SSDs, and so on.
Unless, that is, there is a very demanding, highly parallelizable, high-priority task to consider, which eats up 98% of the CPU time, for example. Then there might be a considerable slowdown for other processes.
Normally, the operating system's scheduler does a good job of giving each user process enough CPU time to finish as quickly as possible, while still presenting a responsive mouse cursor, playing some music in the background, allowing a Chrome instance to run with more than 2 tabs ;) and uploading the newest telemetry data to some servers on the internet, all at the same time.
There are also techniques which make a file available as snapshots bound to a given timestamp. The keyword under Windows is "Shadow Copy". Without going into details, this technique allows, for example, defragmenting a file while it is being edited in some application, or a backup copying a (large) file while a delete operation runs on the same file. The operating system ensures that the access time is considered when a process requests access to a file. So the OS could let the backup finish first before it schedules the delete operation to run (since the delete was started after the backup in this example), or do even more sophisticated things to present a synchronized file-system state, even if it is actually changing at that moment.

Is it theoretically possible to run software parallel to the OS?

Could you run software in conjunction with the OS? Although it might not be very practical, I am curious whether there are any limitations that make this impossible, leaving aside performance, etc. The way I visualize the system functioning is the same way the OS gives the illusion that multiple programs execute at the same time in order to multitask, when in reality only one program operates at a time; but in this case it would not just be the OS and its processes executing on the processor, but a separate program and an OS at the same time. The processor architecture I would base this design on is x86.
At its core, a multitasking OS is nothing more than a task switcher. There are two kinds of multitasking, which usually exist in parallel: co-operative (like Windows 3.1), where the program is responsible for sharing resources (either "I'm waiting for this, so do something else in the meantime" or "give someone else a chance for a while"), and preemptive, where the OS steps in and says "you've had enough time, now give someone else a chance".
Even the most primitive CPUs have interrupts. Something happens (a key is pressed or a timer goes off) and a function is called to do something before returning to whatever was being executed. The return-from-interrupt instruction restores the registers and returns to the exact instruction that was about to be executed when the interrupt happened.
However, it does not have to return to the same place. When entering the interrupt routine, the return address and registers are on the stack. Take them off and save them somewhere referenced by the current task. Now take the ones you saved earlier from a different task and put those on the stack (return address last). Returning from the interrupt will now continue executing that earlier task. You might also want to set a timer before you leave, to impose a time limit before the tasks are switched again.
That's the simplest form of task-switching as you describe.
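A user-space cousin of that stack swap can be written with the old (but still available on Linux) ucontext API: save one task's registers, restore another's. A preemptive switcher would perform the same swap from a timer's signal handler instead of an explicit call.

```c
/* Cooperative task switch: save one task's registers and stack pointer,
 * restore another's. Uses the old (but still available on Linux/glibc)
 * ucontext API. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];

static void task(void)
{
    puts("task: running");
    swapcontext(&task_ctx, &main_ctx);   /* "yield" back to main */
    puts("task: resumed");
}                                        /* returning follows uc_link */

int main(void)
{
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp   = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link          = &main_ctx;  /* where to go when task() returns */
    makecontext(&task_ctx, task, 0);

    swapcontext(&main_ctx, &task_ctx);   /* first switch into the task */
    puts("main: back in main");
    swapcontext(&main_ctx, &task_ctx);   /* resume the task where it yielded */
    puts("main: done");
    return 0;
}
```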

Is the change between kernel/user mode done by hardware or software?

I was just wondering whether the switch between kernel mode and user mode in an operating system is done by the hardware or by the OS itself.
I understand that when a user process wants to get into kernel mode it can make a system call and execute some kernel code. When the system call is made, the process goes into kernel mode, all memory becomes accessible, etc. For this to happen, I would assume that the interrupt handler needs to switch or alter the page table. Is this true? If not, how does the CPU know that it is running in kernel mode and does not need to page-fault when accessing restricted memory (inaccessible to the user process)?
Thanks!
The last answer is actually not true.
Changing to kernel mode doesn't go through 'real mode'. Actually, after finishing the boot process, the computer never goes back to real mode.
In normal x86 systems, changing to kernel mode involves executing 'sysenter' (after setting parameters in some registers), which causes a jump to a predefined address, saved in a model-specific register (MSR) of the CPU. That MSR was set when the computer booted, because writing it can be done only from kernel mode (it is a 'privileged' operation).
So it basically involves executing a software instruction that the hardware responds to in the way it was configured while the CPU was in kernel mode.
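On 64-bit x86 the successor instruction is syscall, and a user program can trigger the mode switch itself. Here is a minimal Linux sketch that invokes write(2) directly; the syscall number and register conventions are specific to x86-64 Linux.

```c
/* On 64-bit x86 the successor to sysenter is the `syscall` instruction.
 * Minimal sketch: invoke write(2) directly. The syscall number (1) and
 * the register conventions are specific to x86-64 Linux. */
static long raw_syscall3(long nr, long a1, long a2, long a3)
{
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(nr), "D"(a1), "S"(a2), "d"(a3)
                      : "rcx", "r11", "memory");  /* syscall clobbers rcx/r11 */
    return ret;
}

int main(void)
{
    static const char msg[] = "hello from user mode\n";
    raw_syscall3(1 /* __NR_write */, 1 /* stdout */,
                 (long)msg, (long)(sizeof msg - 1));
    return 0;
}
```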
This is kind of a broad question - each hardware platform does things slightly differently, but I think the basic answer is that it's done with software that leverages hardware facilities for memory protection, etc.
When a user process wants to do a system call, it executes a special CPU instruction, and the CPU switches from virtual mode (for user processes, with page tables specific to each process) to real mode (for the kernel) and jumps to the OS syscall handler. The kernel can then do what it likes.
CPU support for this is required. The CPU keeps track of which mode it is in, where the page tables are located, jumping the instruction pointer, etc. It is triggered by the user software doing the syscall, and depends on the kernel providing support for whatever the process is trying to do. As with all computation, it's always both hardware and software. It cannot be done solely in software, however, because then there would be no way to prevent a process making a syscall from abusing the privileges it gains, e.g. it could start reading /etc/shadow.
Modern x86 computers have a special instruction just for doing system calls. Earlier x86 processors, and some current RISC ones, have an instruction to trigger an interrupt. Older architectures had other ways of switching control.

Is cursor necessary in a multitasked system?

Does a multitasking system require a mouse cursor so that the user can interact with more than one task/process at a time?
You don't need a mouse to have a multitasking system. The Wikipedia article on multitasking has some history of multitasking systems; they're a lot older than window environments and mice. The first multitasking systems ran batch jobs: you submit a task (by loading up a deck of punched cards, for example) and wait for it to finish; there could be multiple tasks in progress at any given time.
Later systems had user interaction through a command line; for example, in a purely textual unix user interface, you can use job control to run commands in the background, and control which program you get to interact with.
Even in a typical window environment, the application that has the focus (i.e. the application that you type into) isn't the only one that can get CPU time. A window environment on a multitasking operating system lets you switch to another window while an application is computing something. Additionally, pretty much any multitasking system has a bunch of tasks waiting in the background, only running when some event happens (a hardware event, a packet received over the network, a timer, …). So even when there are windows and a mouse, there's no particular relationship between them and multitasking.
There's nothing about a multi-tasking system that requires any kind of involvement by the user.
To tackle the banal answer: my system, which is a 64-bit Windows 7 system, could start up Notepad and seem to be single-process in the sense that I'm only running one program, but obviously that's far from the truth.
At the other end of the scale you could have a system where the concept of a mouse cursor wouldn't make sense at all, let alone a display. For instance, a mainframe would fit this end of the scale: the system doesn't really have a user interface or a mouse, but it is still very much a multi-user, and thus a multi-process, system.
I guess my answer is more like this: what is actually your question?