Understanding higher-level calls to system calls

I am going through the book by Galvin on operating systems. There is a section at the end of chapter 2 where the author writes about "adding a system call" to the kernel.
He describes how, using asmlinkage, we can create a file containing a function and make it qualify as a system call. But in the next part, about how to call the system call, he writes the following:
"Unfortunately, these are low-level operations that cannot be performed using C language statements and instead require assembly instructions. Fortunately, Linux provides macros for instantiating wrapper functions that contain the appropriate assembly instructions. For instance, the following C program uses the _syscall0() macro to invoke the newly defined system call:
Basically, I want to understand how the syscall() function generally works. Now, what I understand by macros is that they are a system for text substitution.
(Please correct me if I am wrong.)
How does a macro call an assembly language instruction?
Is it so that _syscall0(), when compiled, is translated into the address (opcode) of the instruction that executes a trap? (But this somehow doesn't fit with the concept or definition of macros that I have.)
What exactly are the wrapper functions that are contained inside, and are they also written in assembly language?
Suppose I want to create a function of my own which performs the system call; what are the things that I need to do? Do I need to compile it to generate the machine code for performing trap instructions?

Man, you have to pay $156 to buy the thing, and then you actually have to read it. You could probably get a VMS Internals and Data Structures book for under $30.
That said, let me try to translate that gibberish into English.
System calls do not use the same kind of linkage (i.e. method of passing parameters and calling functions) that other functions use.
Rather than executing a call instruction of some kind, to execute a system service you trigger an exception (which on Intel hardware is, somewhat bizarrely, called an interrupt).
The CPU expects the operating system to create a DISPATCH TABLE and store its location and size in special hardware registers. The dispatch table is an array of pointers to handlers for exceptions and interrupts.
Exceptions and interrupts have numbers, so when exception or interrupt #1 occurs, the CPU invokes the second handler in the dispatch table (index 1, not index 0) in kernel mode.
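Conceptually (this is only a rough sketch in C, not how the real descriptor table is declared; actual entries also carry segment selectors and privilege information), you can picture the dispatch table as an array of handler pointers indexed by the exception/interrupt number:

/* Conceptual sketch only: a dispatch table is logically an array of
 * pointers to handler routines, indexed by exception/interrupt number. */
typedef void (*exception_handler_t)(void);

static exception_handler_t dispatch_table[256];

/* What the CPU does, in effect, when exception `vector` is raised:
 * look up entry `vector` and jump to it in kernel mode. */
void dispatch(unsigned vector)
{
    dispatch_table[vector]();
}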
What exactly are the wrapper functions that are contained inside, and are they also written in assembly language?
The operating system usually devotes one (but sometimes more) exception to system services. You need to do something like this in assembly language to invoke a system service:
int $0x80    ; explicitly trigger exception 0x80
Because you have to execute a specific instruction, this has to be done in assembly language. Maybe your C compiler can do inline assembly to call a system service like that. But even if it could, it would be a royal PITA to have to do it each time you wanted to call a system service.
Plus I have not filled in all the details here (only the actual call to the system service). Normally, when you call functions in C (or whatever), the arguments are pushed on the program stack. Because the stack usually changes when you enter kernel mode, arguments to system calls need to be stored in registers.
PLUS you need to identify what system service you want to execute. Usually, system services have numbers. The number of the system service is loaded into the first register (e.g., R0 or AX).
The full process when you need to invoke a system service is:
1. Save the registers you are going to overwrite on the stack.
2. Load the arguments you want to pass to the system service into hardware registers.
3. Load the number of the system service into the lowest register.
4. Trigger the exception to enter kernel mode.
5. Unload the arguments returned by the system service from the registers.
6. Possibly do some error checking.
7. Restore the registers you saved before.
Instead of doing this each time you call a system service, operating systems provide wrapper functions for high level languages to use. You call the wrapper as you would normally call a function. The wrapper (in assembly language) does the steps above for you.
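For a concrete picture, here is a minimal sketch of such a wrapper for the write system call on 32-bit x86 Linux, written as C with inline assembly (the syscall number 4 and the EAX/EBX/ECX/EDX register assignments are specific to that ABI, and the function name is made up):

/* Sketch of a hand-written wrapper for write(2) on 32-bit x86 Linux.
 * The syscall number goes in EAX (4 = __NR_write on i386), the arguments
 * go in EBX, ECX and EDX, then `int $0x80` traps into the kernel.
 * The result (or a negated errno value) comes back in EAX. */
static long my_write(int fd, const void *buf, unsigned long count)
{
    long ret;
    __asm__ volatile ("int $0x80"
                      : "=a" (ret)              /* result in EAX          */
                      : "0" (4),                /* syscall number in EAX  */
                        "b" ((long) fd),
                        "c" ((long) buf),
                        "d" ((long) count)
                      : "memory");
    return ret;   /* a real wrapper would set errno when ret is negative */
}

Calling my_write(1, "hi\n", 3) on such a system writes to standard output. The _syscall macros (and, these days, glibc) generate essentially this kind of code for you.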
Because these wrappers are pretty much the same (usually the only difference is the result of different numbers of arguments), wrappers can be created using macros. Some assemblers have powerful macro facilities that allow a single macro to define all wrappers, even with different numbers of arguments.
Linux provides multiple _syscall C macros that create wrappers. There is one for each number of arguments. Note that these macros are just for operating system developers. Once the wrapper is there, everyone can use it.
How does a macro call an assembly language instruction?
These _syscall macros have to generate inline assembly code.
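To make that concrete, the historical _syscall1() macro (long since removed from the kernel headers; modern code uses the generic syscall(2) wrapper in libc instead) looked roughly like this simplified sketch, where __NR_##name is the system service number taken from the kernel's unistd.h:

/* Simplified sketch of the old _syscall1() macro: text substitution that
 * expands to a whole wrapper function whose body is inline assembly. */
#define _syscall1(type, name, type1, arg1)                        \
type name(type1 arg1)                                             \
{                                                                 \
    long __res;                                                   \
    __asm__ volatile ("int $0x80"                                 \
                      : "=a" (__res)                              \
                      : "0" (__NR_##name),                        \
                        "b" ((long)(arg1)));                      \
    return (type) __res;  /* the real macro also set errno */     \
}

/* Usage: _syscall1(int, close, int, fd) expands to an int close(int fd)
 * function that traps into the kernel with syscall number __NR_close. */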
Finally, note that these wrappers do not define the actual system service. That has to be set up in the dispatch table and the system service exception handler.


Should temporary registers be saved before issuing an environment call?

In the following RISC-V assembly code:
...
#Using some temporary (t) registers
...
addi a7, zero, 1      # PrintInt system call code
addi a0, zero, 100    # the integer to print
ecall
...
Should any temporary (t) registers be saved to the stack before using ecall? When ecall is executed, an exception occurs, the processor enters kernel mode, and code is executed from an exception handler. Some information is saved when exceptions occur, such as EPC and CAUSE, but what about the temporary registers? Environment calls are considered not to be like procedures for safety reasons, but they look like it. Do the procedure calling conventions still apply in this case?
You are correct that the hardware captures some information, such as the epc (if it didn't, the interrupted pc would be lost during the transfer of control to the exception handler).
However, that is all the hardware does; the rest is up to software. Here, RARS provides the exception handler for ecall.
The RARS documentation states (in the help section on syscalls, which is what these are called on MIPS, RARS having come from MARS):
Register contents are not affected by a system call, except for result registers as specified in the table below.
Below this quote in the help is the ecall function code table, labeled "Table of Available Services", in which 1 is PrintInt.
Should any temporary (t) registers be saved to the stack before using ecall?
No, it is not necessary; the t registers will be unaffected by an ecall.
This also means that we can add ecalls to do printf-style debugging without concern for preserving the t registers, which is nice. However, keeping that in mind, we might generally avoid a0 and a7 for our own variables, since putting in an ecall for debugging will need those registers.
In addition, you can write your own exception handler, and it would not have to follow any particular conventions for parameter passing or even register preservation.

Structured Text: Function and Function Block (Pros and Cons)

I come from a computer science background and am used to traditional IT programming. I have relatively little experience with structured text. In my current project I am using many function blocks extensively. I am aware that this involves some memory issues and so on. Could anyone give me some advantages and disadvantages of each of them? Should I avoid them and write everything in a single program? Practical hints would be welcome, as I am about to release my application.
System: Codesys
I also come from the PC programming world, and there are certain object tricks I miss when programming in Codesys. The function blocks go a long way towards object thinking, though. They're too easy to peek into from the outside, so some discipline from the user is necessary, to encapsulate the functionality or objects.
You shouldn't write a single monolithic program to handle all the functionality, but instead use the Codesys facilities to divide the program into objects where possible. This also means identifying which objects are alike and can be programmed as function blocks. The instance of a function block is created in memory when the program is downloaded, which means it is always visible for monitoring.
I normally use POU's to divide the project into larger parts, e.g. Machine1(prg), Machine2(prg) and Machine3(prg). If each machine has one or more motors of similar type, this is where the function blocks come in, so that I can program one motor object called FB_Motor, and reuse it for the necessary motor instances inside the 3 machine programs. Each instance can then hold its own internal states, timers, input output, or whatever a motor needs.
The structure for the above example is now:
MAIN, calls
  Machine1 (prg), which calls
    fbMotor1 (instance of FB_Motor, local to Machine1)
    fbMotor2 (instance of FB_Motor, local to Machine1)
  Machine2 (prg), which calls
    fbMotor1 (instance of FB_Motor, local to Machine2)
  Machine3 (prg), which calls
    fbMotor1 (instance of FB_Motor, local to Machine3)
    fbMotor2 (instance of FB_Motor, local to Machine3)
    fbMotor3 (instance of FB_Motor, local to Machine3)
The functions are another matter. Their data exists on the stack when the function is called, and when the function has returned its value, the data is released. There are lots of built-in functions, e.g. BOOL_TO_INT(), SQR(n) and so on.
I normally use functions for lookup and conversion functions. And they can be called from all around the program.
Clarity, robustness and maintainability are everything in the PLC world. Function blocks help you achieve that if the structure is kept relatively flat (so one should avoid function block inside function block inside function block, as opposed to true objects and their inheritance).
Also, the graphical languages are there for a reason: they visualise complex systems in an easy-to-digest form, so that the maintenance personnel of the future have an easier time following what is wrong with the PLC program and that part of the factory.
As for ST, it helps to remember that it is based on strongly typed Wirthian languages (Ada, Pascal, etc.). Also, what is often more important than memory usage is the constant cycle time of the program (since it is a real-time system). Another cup of tea entirely is the electrical layer of the control system, plus the physical layer and all the relations on that layer, which can flash back somewhere else in your program if not taken into account.

VHDL Bus Functional Modelling - Can't put groups of procedures into a package to clean up the code

I want to organize a working bus functional model and push commonly used procedures (which look like CPU subroutines) out into a package and get them out of the main cpu model, but I'm stuck.
The procedures don't have access to the hardware bits when they're pushed out in a package.
In Verilog, I would put commonly used procedures out into an include file and link them into the CPU model as required for a given test suite.
More details:
I have a working bus functional model of a CPU, for simulation test benching.
At the "user interface" level I have a process called "main" running inside the CPU model which calls my predefined "instruction set" like this:
cpu_read(address, read_result);
cpu_write(address, write_data);
etc.
I bundle groups of those calls up into higher level procedures like
configure_communication_bus;
clear_all_packet_counters;
etc.
At the next layer these generic functions call a more hardware specific version which knows the interface timing for the design,
and those procedures then use an input record and output record to connect to the hardware module ports and waggle the cpu bus signals as required.
cpu_read calls hardware_cpu_read(cpu_input_record, cpu_output_record, address, read_result);
Something like this:
procedure cpu_read (address     : in  std_logic_vector(15 downto 0);
                    read_result : out std_logic_vector(31 downto 0)) is
begin
  hardware_cpu_read(cpu_input_record, cpu_output_record, address, read_result);
end procedure;
The cpu_input_record and cpu_output_record are declared as signals of type nnn_record in the cpu model vhdl file.
So this is all working, but every single one of these procedures is stored in the CPU VHDL module file, and all in the procedure declaration section so that they are all in the same scope.
If I share the model with team members they will need to add their own testing subroutines, and those also end up in the same location in the file; as well, their simulation test code has to go into the "main" process along with mine.
I'd rather link in various tests from outside the model, and only keep model-specific procedures in the model file.
Ironically, I can push the lowest-level hardware procedures out to a package, and call those procedures from within the "main" process, but the higher-level procedures can't be put into that package or any other package because they don't have access to cpu_input_record and cpu_output_record.
I feel like there must be a simple way to clean up this code and make it modular, and I'm just missing something obvious.
I don't really think making a command interpreter and loading my test code into a behavioral ROM is the right way to go, by the way. Nor is fighting with the simulator interface to connect up a C program, but I may break down and try that...
Quick sketch of an answer (to the question I think you are asking! :-) though I may be off-beam...
To move the BFM subprograms into a reusable package, they need to be independent of the execution scope - that usually means a long parameter list for each of them. So using them in a testbench quickly gets tedious compared with the parameterless (or parameter-lite) versions you have now.
The usual workaround is to implement the BFM in a package, with long parameter lists.
Then write parameter-lite local equivalents (wrappers) in the execution scope, which simply call the package versions supplying all the parameters explicitly.
This is just boilerplate - not pretty but it does allow you to move the BFM into a package. These wrappers can be local to the testbench, to a process within it, or even to a subprogram within that process.
(The parameter types can be records for tidiness: these are probably declared in a third package, shared between the BFM, the TB, and the synthesisable device under test...)
Thanks to overloading, there is no ambiguity between the local and BFM package versions, so the actual testbench remains as simple as possible.
Example wrapper function:
function cpu_read(address : unsigned) return slv_32 is
begin
  return BFM_pack.cpu_read (
    address     => address,
    rd_data_bus => tb_rd_data_bus,
    wait_sig    => tb_wait_signal,   -- "wait" is reserved in VHDL, so the BFM formal needs another name
    oe          => tb_mem_oe
    -- ditto for all the signals, constants and variables it needs from the tb_ scope
  );
end cpu_read;
Currently your test procedures require two extra signals on them, cpu_input_record and cpu_output_record. This is not so bad. It is not uncommon to just have these on all procedures that interact with the cpu and be done with it. So use hardware_cpu_read and not cpu_read. Add cpu_input_record, cpu_output_record to your configure_communication_bus and clear_all_packet_counters procedures and be done. Perhaps choose shorter names.
I take a similar approach, except I use only one record with resolved elements. To make this work, you need to initialise the record so that all elements are non-driving (i.e. 'Z' for std_logic). To make it more flexible, I have created resolution functions for integer, time, and real. However, this only saves you one signal - not a huge win. Perhaps halfway to where you think you want to be. But it is more work than what you are doing.
For VHDL-201X, we are working on syntax to allow parameters/ports to map automatically to an identically named signal. This will get you to where you want to be with any of the approaches (yours, mine, or Brian's, without the extra wrapper subprogram). It is posted here: http://www.eda.org/twiki/bin/view.cgi/P1076/ImplicitConnections. Given this, I would add the two records to your procedures and call it good enough for now.
Once you get past this problem, you seem to also be asking how to write separate tests using the same testbench. For this I use multiple architectures - I like to think of these as a Factory Class for concurrent code. To make this feasible, I separate the stimulus-generation code from the rest of the testbench (typically: netlist connections and clock). My presentation, "VHDL Testbench Techniques that Leapfrog SystemVerilog", has an overview of this architecture along with a number of other goodies. It is available at: http://www.synthworks.com/papers/index.htm
You're definitely on the right track; in fact I have a variant like what you describe.
The catch is, now I build up whole subroutines using the "parameter-light" procedures, and those are what I want to put in a package to share and reuse. The problem is that any procedure pushed out to a package can't call the parameter-light procedures in the main VHDL file...
So what happens is we have one main VHDL file with all the common CPU hardware setup routines, and every designer's test code, all in the same VHDL file...
Long story short, putting our test subroutines into separate files is really what I was hoping for...

Why are there ioctl calls in socket.c?

I am trying to understand why there are ioctl calls in socket.c. In a modified kernel that I am using, there are some ioctl calls which load the required modules when the calls are made.
I was wondering why these calls ended up in socket.c. Isn't a socket kind of not-a-device, while ioctls are primarily used for devices?
I am talking about a heavily modified 2.6.32.0 kernel here.
ioctl suffers from its historic name. While originally developed to perform I/O controls on devices, it is a generic enough construct that it may be used for arbitrary service requests to the kernel in the context of a file descriptor. A file descriptor is an opaque value (just an int) provided by the kernel that can be associated with anything.
Now, if you take a file descriptor and think of things as files, which most *nix constructs do, open/read/write/close isn't enough. What if you want to label a file (rename)? What if you want to wait for a file to become available (ioctl)? What if you want to control how a terminal behaves, for example when it closes (termios)? All the "meta" operations that don't make sense in the core read/write context are lumped under ioctls, fcntls, etc., unless they are so frequently used that they deserve their own system call (e.g. the flock(2) functionality in BSD 4.2).
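As a small illustration (a sketch, not code from socket.c), FIONREAD is one such ioctl that works on a socket file descriptor even though a socket is not a device; it asks the kernel how many bytes are queued for reading:

#include <stdio.h>
#include <sys/ioctl.h>

/* Sketch: an ioctl issued on a socket file descriptor rather than a device.
 * FIONREAD reports how many bytes are currently available to read. */
int bytes_ready(int sockfd)
{
    int n = 0;
    if (ioctl(sockfd, FIONREAD, &n) == -1) {
        perror("ioctl(FIONREAD)");
        return -1;
    }
    return n;
}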

System call or function call - performance-wise

In Linux, when you can choose between a system call and a library function call to do a task, which option is better from a performance standpoint?
We should note that in most cases we do not use system calls directly. We use the interface provided by glibc.
http://www.kernel.org/doc/man-pages/online/pages/man2/syscalls.2.html
http://www.gnu.org/software/libc/manual/html_node/System-Calls.html
Now, in cases like file management, IPC, process management, etc., which are the core resource management activities of the operating system, the only option is a system call and not library functions.
In these cases, we typically use a library function which works as a wrapper over a system call. That is, say, for reading a file we have many library functions like
fgetc/fgets/fscanf/fread - all of which ultimately invoke the read system call.
So should we use the read system call, or the other library functions?
That depends on the particular application. If we use read, then we will need to change the code to run it on some other operating system where read is not available; we lose some flexibility. Using read directly may be worthwhile when we are sure of the platform and can do some optimisations with it, or when the application must use only file descriptors and not file pointers, etc.
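As a rough sketch of the two styles (the file name is made up and error handling is trimmed): the stdio call buffers data in user space, while read goes to the kernel on every call:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];

    /* Library-function style: fread buffers in user space, so many small
     * reads may translate into far fewer read() system calls. */
    FILE *fp = fopen("data.bin", "rb");        /* "data.bin" is a made-up name */
    if (fp) {
        size_t n = fread(buf, 1, sizeof buf, fp);
        printf("fread returned %zu bytes\n", n);
        fclose(fp);
    }

    /* System-call style: every read() traps into the kernel, but you get
     * direct control over the file descriptor. */
    int fd = open("data.bin", O_RDONLY);
    if (fd != -1) {
        ssize_t n = read(fd, buf, sizeof buf);
        printf("read returned %zd bytes\n", n);
        close(fd);
    }
    return 0;
}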
Now consider cases where we only need user-level operations and invoke no service from the operating system, like, say, copying a string (strcpy). In such a case we should definitely not use a system call unnecessarily, if one even exists for the task, since it would add extra overhead from operating system intervention that is not needed here.
So I feel the choice between a system call and a library function only arises in cases where we have a library function built on top of a system call (adding to the examples above, we have, say, malloc, which calls the brk system call).
Here the choice will depend on the particular type of software, the platform on which it should run, and the precise non-functional requirements like speed (though you cannot say with certainty that your code will run faster if you use brk instead of malloc), portability, etc.