what is the meaning of the content of /proc/ioports - linux-device-driver

I got some strange text by 'catting' the /proc/ioports file on my Linux PC:
0000-001f : dma1
0020-003f : pic1
0040-005f : timer
0060-006f : keyboard
0070-007f : rtc0
...
What I don't understand is the first part of each entry. Take the first entry, for example: does it mean 32 ports (0x0000 through 0x001f) are occupied by dma1? If so, I can't imagine how many ports an x86 processor has, since I know there are only a few 8-bit ports on an 8-bit MCU.
Can anyone explain the meaning of these numbers, and the I/O ports of an x86 processor?

It's the list of I/O port regions that have been claimed by kernel drivers using the request_region kernel function. So it's not the complete list of I/O ports or devices available, only the ones that have been claimed by various kernel drivers. The request_region mechanism allows the kernel to prevent multiple drivers from talking to the same device.
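As a rough illustration, here is a minimal sketch (not a real driver; the port range and the names are made up for the example) of how a kernel module claims a port range with request_region, which is what makes an entry appear in /proc/ioports:

#include <linux/module.h>
#include <linux/init.h>
#include <linux/ioport.h>

#define EXAMPLE_BASE 0x0240   /* hypothetical I/O base, just for illustration */
#define EXAMPLE_LEN  0x10     /* claim 16 ports: 0x0240-0x024f */

static struct resource *example_region;

static int __init example_init(void)
{
    /* Claim the range; it will show up as "0240-024f : example_dev"
     * in /proc/ioports if the call succeeds. */
    example_region = request_region(EXAMPLE_BASE, EXAMPLE_LEN, "example_dev");
    if (!example_region)
        return -EBUSY;   /* another driver already owns these ports */
    return 0;
}

static void __exit example_exit(void)
{
    release_region(EXAMPLE_BASE, EXAMPLE_LEN);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");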

/proc/ioports lists the names and ranges of the I/O port regions claimed and handled by device drivers in the Linux kernel.
As an example, I/O ports 0070-007f are claimed by the RTC Linux kernel driver.
One would assume that the port ranges claimed by a driver correspond to the port ranges actually offered by the respective hardware, but note that there is no mechanism that ensures they actually do.

Related

How do I view raw memory addresses of MODBUS TCP/IP holding registers in CODESYS

For a work project, I have to read a bunch of holding registers from an IFM CR1203 PLC that is programmed using CODESYS 3.5.
The PLC will be running a slave instance, and the device reading the holding registers will be a PC running a custom application written in JavaScript acting as the client. I have already written and tested the Modbus TCP/IP functions for the custom application (for a previous project I had to do the same for a different PLC programmed on a different platform).
My current issue is that I need the raw memory address of the first holding register to do this, but I can't find it in the CODESYS IDE. CODESYS uses an addressing system that makes it easy for different CODESYS-based devices to communicate. Here is a link that explains how it works: CODESYS MODBUS register location guide
The only thing that looks like it can work is from the link above:
<memory position> : <number> ( .<number> )* // Depends on the target system
But I don't fully understand what all that means.
I also can't find any documentation on the PLC or CODESYS that explains this topic in enough detail. Here is a snippet of dummy code used for testing that shows the CODESYS addresses:
Can someone please explain to me how I can convert the value %IW0 to a raw memory address, for example, 0xFFFF?
I use Machine Expert (CODESYS 3.5.16), and its documentation says:
The I/Os are mapped to Modbus registers from the master perspective as follows:
%IWs are mapped from register 0 to n-1 and are R/W (n = holding register quantity; each %IW register is 2 bytes).
%QWs are mapped from register n to n+m-1 and are read only (m = input register quantity; each %QW register is 2 bytes).
So in your example they should be addresses 0 and 1.
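In other words, %IW0 is not exposed as a raw memory address at all; from the Modbus client's point of view it is simply holding register 0. As a minimal sketch of what the client would send, assuming a standard Modbus TCP "Read Holding Registers" request (the unit ID and quantity below are example values):

#include <stdint.h>
#include <stddef.h>

/* Build a Modbus TCP "Read Holding Registers" (function 0x03) request.
 * For %IW0 and %IW1 the mapping above gives start_reg = 0, quantity = 2. */
static size_t build_read_holding(uint8_t *buf, uint16_t transaction_id,
                                 uint8_t unit_id, uint16_t start_reg,
                                 uint16_t quantity)
{
    buf[0]  = transaction_id >> 8;   /* MBAP: transaction identifier */
    buf[1]  = transaction_id & 0xff;
    buf[2]  = 0;                     /* MBAP: protocol identifier = 0 (Modbus) */
    buf[3]  = 0;
    buf[4]  = 0;                     /* MBAP: remaining length = 6 bytes */
    buf[5]  = 6;
    buf[6]  = unit_id;               /* MBAP: unit identifier */
    buf[7]  = 0x03;                  /* PDU: function 0x03, read holding registers */
    buf[8]  = start_reg >> 8;        /* starting register, e.g. 0 for %IW0 */
    buf[9]  = start_reg & 0xff;
    buf[10] = quantity >> 8;         /* number of 16-bit registers to read */
    buf[11] = quantity & 0xff;
    return 12;                       /* total frame length to send over TCP */
}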

How to Decide Port I/O Address of ACPI Timer

According to OSDev, to locate the port I/O address of the ACPI timer, we first open the FADT table and check the entries PM Timer Block Length and PM Timer Block Address. On my computer, PM Timer Block Address gives address 0x408, and it works correctly.
However, in the implementation of OVMF, the I/O address of the ACPI timer is calculated as PMBA + 0x8. I searched the internet and found no information about this way of calculating it.
I'm wondering whether both methods of determining the ACPI timer address are correct. If both are correct, where can I find the definition of the second calculation?
Firmware (e.g. OVMF) uses chipset-specific methods to determine the I/O port of the ACPI timer, and then constructs the FADT and fills it in so that an OS doesn't need to be chipset-specific.
If you don't want to use the FADT, then you can write a chipset-specific driver for each chipset. For some cases (open-source emulators) this may be relatively easy, and for some cases (proprietary and undocumented real hardware) it will be almost impossible.
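For completeness, once you have the port address from the FADT (0x408 on the asker's machine), reading the timer is just a 32-bit port read. Here is a minimal userspace sketch for Linux, assuming a reasonably recent kernel where ioperm() can grant access to ports above 0x3ff, and assuming the usual 24-bit counter width:

#include <stdio.h>
#include <stdlib.h>
#include <sys/io.h>

int main(void)
{
    unsigned short pm_tmr_port = 0x408;  /* taken from the FADT's PM Timer Block field */

    /* Request access to the 4 ports of the PM timer block (needs root). */
    if (ioperm(pm_tmr_port, 4, 1) != 0) {
        perror("ioperm");
        return EXIT_FAILURE;
    }

    /* The ACPI PM timer ticks at 3.579545 MHz; mask to 24 bits unless the
     * FADT flags advertise a 32-bit timer. */
    unsigned int ticks = inl(pm_tmr_port) & 0x00ffffff;
    printf("PM timer count: %u\n", ticks);
    return 0;
}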

What's the difference between endpoint and socket?

Almost every definition of socket that I've seen relates it very closely to the term endpoint:
wikipedia:
A network socket is an internal endpoint for sending or receiving data at a single node in a computer network. Concretely, it is a representation of this endpoint in networking software
This answer:
a socket is an endpoint in a (bidirectional) communication
Oracle's definition:
A socket is one endpoint of a two-way communication link between two programs running on the network
Even stackoverflow's definition of the tag 'sockets' is:
An endpoint of a bidirectional inter-process communication flow
This other answer goes a bit further:
A TCP socket is an endpoint instance
Although I don't understand what "instance" means in this case. If an endpoint is, according to this answer, a URL, I don't see how that can be instantiated.
"Endpoint" is a general term, including pipes, interfaces, nodes and such, while "socket" is a specific term in networking.
IMHO, logically speaking (emphasis added), "socket" and "endpoint" are the same, because both are the concatenation of an Internet address with a TCP port. Strictly speaking, in core networking there is no such thing as an "endpoint"; there is only a "socket". Read on below...
As @Zac67 highlighted, "socket" is a very specific term in networking: if you read the TCP RFC (https://www.rfc-editor.org/rfc/rfc793), you won't find a single reference to "endpoint"; it only talks about "socket". But when you come out of the RFC world, you will hear a lot about "endpoint".
Now, both terms refer to the combination of an IP address and a TCP port, but you wouldn't ask someone "please give me the socket of your application"; you would say "please give me the endpoint of your application". So, IMHO, the way to understand the difference between socket and endpoint is: even though both refer to the combination of an IP address and a TCP port, you use the term "socket" when talking in the context of computer processes or the OS, and "endpoint" when talking about it more generally.
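To make the "IP address + port" point concrete, here is a minimal sketch in C (the address and port are arbitrary example values): the sockaddr_in you connect to names the remote endpoint, while the descriptor returned by socket() is your local handle to the communication.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* The socket: a local OS object your process holds a descriptor to. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return EXIT_FAILURE;
    }

    /* The endpoint being contacted: an IP address plus a TCP port. */
    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(8080);                      /* example port */
    inet_pton(AF_INET, "192.0.2.10", &peer.sin_addr); /* example address */

    if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) < 0)
        perror("connect");

    close(fd);
    return 0;
}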
I am a guy coming from the embedded systems world and low-level things.
In that world, an endpoint is a hardware buffer constructed at the far end from your machine. What does that mean?
YourMachine <---------------> Device
[Socket] ----------------> [Endpoint]
[Endpoint] <---------------- [Socket]
Both sockets and endpoints are endpoints, but a socket is an endpoint that resides on the sender, which here is your machine (socket is the word used to distinguish between sender and receiver).
OK, now that we know it is a buffer, what is the relation between buffers and networking?
Windows
When you create a socket on Windows, the OS returns a handle to that socket; a socket is in fact a kernel object. In Windows, when you create a kernel object, the returned value is a handle used to access that object; handles are usually void* values which are then cast to numerical values that Windows can understand. Now that you have access to the socket kernel object, all I/O operations are handled in the OS kernel. Since you want to communicate with an external device, you have to reach the kernel first, and the socket does exactly that. In other words, creating a socket creates a socket object in the kernel = creates an endpoint in the kernel = creates a buffer in the kernel. That buffer is used to stream data through the wires later on, using the OS HAL (hardware abstraction layer), so you can talk to other devices and you are happy.
Now, if the other device doesn't have a communication buffer (= an endpoint), then you can't communicate with it, even if you open a socket on your end; communication has to be two-way (send and receive).
Another example of accessing an I/O peripheral is accessing RAM (main memory). There are two ways of accessing RAM: either you access the process stack or the process heap. The stack is not a kernel object; in fact you can access the stack directly without reaching the OS kernel, simply by subtracting a value from RSP (the stack pointer register), for example:
; This example demonstrates how to allocate 32 contiguous bytes from the stack on Windows
; Intel syntax (MASM)
StackAllocate proc
    sub rsp, 20h        ; reserve 32 bytes of stack space, no kernel involved
    ; ... the 32 bytes at [rsp] can be used here ...
    add rsp, 20h        ; release the space so ret finds the return address again
    ret
StackAllocate endp
Accessing the heap is different: the heap is a kernel object, so when you call malloc()/the new operator in your code, a long call chain runs through Windows code. The point is that reaching RAM requires the kernel's help; the stack allocation above does not actually reach RAM, since all I did was subtract a number from an existing value in RSP, which is inside the CPU, so I never left it. The heap object in the kernel returns a handle that Windows uses to manage fragmented memory, and in the end returns a void* to that memory.
Hope that helped

Device Drivers vs the /dev + glibc Interface

I am looking to have the processor read from I2C and store the data in DDR in an embedded system. As I have been looking at solutions, I have been introduced to Linux device drivers as well as the GNU C Library. It seems that many operations you can perform with the basic Linux drivers can also be performed with basic glibc system calls. I am somewhat confused about when one should be used over the other. Both interfaces can be accessed from user space.
When should I use a kernel driver to access a device like I2C or USB and when should I use the GNU C Library system functions?
The GNU C Library forwards function calls such as read, write, ioctl directly to the kernel. These functions are just very thin wrappers around the system calls. You could call the kernel all by yourself using inline assembly, but that is rarely helpful. So in this sense, all interactions with the kernel driver will go through these glibc functions.
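As a concrete illustration of how thin those wrappers are, here is a minimal I2C read sketch (the device path /dev/i2c-1 and the 0x50 slave address are assumptions for illustration; they depend on your board). Every call below is a glibc wrapper, and the actual hardware access is performed by the kernel's I2C drivers behind the device node:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void)
{
    /* open(), ioctl() and read() are glibc wrappers around system calls;
     * the work is done by the kernel's i2c-dev and bus controller drivers. */
    int fd = open("/dev/i2c-1", O_RDWR);          /* assumed bus number */
    if (fd < 0) {
        perror("open /dev/i2c-1");
        return EXIT_FAILURE;
    }

    if (ioctl(fd, I2C_SLAVE, 0x50) < 0) {         /* assumed slave address */
        perror("ioctl I2C_SLAVE");
        close(fd);
        return EXIT_FAILURE;
    }

    uint8_t buf[16];
    ssize_t n = read(fd, buf, sizeof(buf));       /* read bytes from the device */
    if (n < 0)
        perror("read");
    else
        printf("read %zd bytes\n", n);

    close(fd);
    return 0;
}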
If you have questions about specific interfaces and their trade-offs, you need to name them explicitly.
In ARM:
Privilege states are built into the processor and are changed via assembly commands. A memory protection unit, a part of the chip, is configured to disallow access to arbitrary ranges of memory depending on the privilege status.
In the case of the Linux kernel, ALL physical memory is privileged; memory addresses in userspace are virtual (fake) addresses, translated to real addresses once in privileged mode.
So, to access a privileged memory range, the mechanics are like a function call: you set the parameters indicating what you want, and then issue a supervisor call ('SVC'), an instruction that takes control of the program away from userspace and gives it to the kernel. The kernel looks at your parameters and does what you need.
The standard library basically makes that whole process easier.
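For instance, here is a minimal sketch contrasting the glibc wrapper with issuing the same system call "by hand" through syscall(2) (still one level above a raw SVC or inline assembly, but it shows how little the wrapper adds):

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const char msg[] = "hello via glibc wrapper\n";
    const char raw[] = "hello via raw syscall\n";

    /* The usual way: glibc's write() wrapper sets up the trap into the kernel. */
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);

    /* The "by hand" way: pass the syscall number and arguments ourselves.
     * This is essentially what the wrapper does before entering the kernel. */
    syscall(SYS_write, STDOUT_FILENO, raw, sizeof(raw) - 1);

    return 0;
}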
Drivers create interfaces to physical memory addresses and provide an API through the SVC call and whatever 'arguments' it's passed.
If physical memory is not reserved by a driver, the kernel generally won't allow anyone to access it.
Accessing physical memory you're not privileged to will cause a "bus error".
BTW: You can use a driver like UIO to put physical memory into userspace.
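A minimal sketch of what a UIO mapping looks like from user space (this assumes a UIO driver has already been bound to the device and exposes it as /dev/uio0; the register read is a made-up example):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    int fd = open("/dev/uio0", O_RDWR | O_SYNC);   /* assumed UIO device node */
    if (fd < 0) {
        perror("open /dev/uio0");
        return EXIT_FAILURE;
    }

    /* Map the first memory region of the UIO device (map0).
     * Its real size is published in /sys/class/uio/uio0/maps/map0/size;
     * one page is assumed here for simplicity. */
    size_t map_len = (size_t)sysconf(_SC_PAGESIZE);
    volatile uint32_t *regs = mmap(NULL, map_len, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return EXIT_FAILURE;
    }

    printf("register 0: 0x%08x\n", regs[0]);       /* example register read */

    munmap((void *)regs, map_len);
    close(fd);
    return 0;
}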

Theoretical embedded linux requirements

I come from a programmer background using Java, C#, C++, Javascript
I got myself a Raspberry Pi (Model 1 A, the one without Ethernet) and played around with it for a while. I used Raspbian and Arch Linux ARM (since it was said to be small and customizable). Unfortunately, I didn't manage to configure them the way I wanted.
I am trying to build a nice looking (embedded) system with the only goal to start (boot) the Raspberry Pi fast and autostart a test application which will be written in C# (Mono), C++ (Qt), Java (Java Runtime) or something in JavaScript/HTML.
Since I was not able to get rid of all the log messages (I got rid of most), the tty login screen, and the attempts to connect to the network (although the Model 1 A does not have Ethernet at all), booting was ugly and took a long time (over a minute in some cases).
It seems I will have to build a minimal embedded Linux, but I lack the theory of which elements an embedded Linux consists of and how they fit together.
My question: what are the theoretically required parts of an embedded Linux system hosting either Mono, Qt, or a Java runtime on a Raspberry Pi?
So far I know the following parts:
the hardware (raspberry pi model 1 A) + sd card
the sd card holds 2 partitions, 1 boot partition (fat32), 1 data partition (ext4)
a boot loader
a linux kernel (which can be optimized to the needs of a raspi)
But what then? My research got stuck at "use a distro", which I don't want. What are the missing pieces between the kernel and starting an application?
An embedded Linux system is made of many different parts that work together towards the same goal of making things work efficiently.
Ideally, that is not much different from a regular GNU/Linux system, but let's see in detail the building blocks of a generic embedded system.
For the following explanation, I am assuming an ARM architecture. What is written below may differ slightly from implementation to implementation, but it is usually a common track for commercial embedded systems.
Blocks of a GNU/Linux Embedded System
Hardware
SoC
The SoC is where all the processing takes place; it is the main processing unit of the whole system and the only place that has "intelligence". It is in charge of driving the other hardware and running your software.
It is made of various and heterogeneous sub-blocks:
Core + Caches + MMU - the "real" processor, e.g. an ARM Cortex-A9. It's the main thing you will notice when choosing an SoC.
It may be assisted by, e.g., a SIMD coprocessor like NEON.
Internal RAM - generally very small. Used in the first phase of the boot sequence.
Various "Peripherals" - connected via some interconnect
fabric/bus to the Core. These can span from a simple ADC to a 3D Graphics Accelerator. Examples of such IP cores are: USB, PCI-E, SGX, etc.
A low power/real time coprocessor - some systems offer one or more coprocessors intended either to help the main Core with real-time tasks (e.g. industrial communication buses) or to handle low-power states. Its architecture might (or might not) be related to the Core's.
External RAM
It is used by the SoC to store temporary data after the system has bootstrapped and during the bootstrap itself. It's usually the memory your embedded system uses during regular operation.
Non-Volatile Memory - optional
May or may not be present. In your case it's the SD card you mentioned. In other cases could be a NAND, NOR or SPI Dataflash memory (or any combination of them).
When present, it is often the regular source of data the SoC will read from and usually stores all the SW components needed for the system to work.
It may not be necessary or useful in some kinds of applications.
External Peripherals
Anything not strictly related to the above.
Could be a MAC ID EEPROM, some relays, a webcam or whatever you can possibly imagine.
Software
First of all, we introduce what is called the bootchain, which is what happens as soon as you power up your SoC and - somehow - tell it to start running. In the following list, the bootchain is the sequence of steps from point 1 to point 4.
Apart from specific/exotic implementations, it is more or less always the same:
Boot ROM code - a small (usually masked, i.e. factory-programmed) memory contained in the SoC. The first thing the SoC will do when powered up is execute the code in it.
This code will - generally according to external configuration pins - decide the so-called "boot strategy" or "boot order", which is where (and in what order) to look for additional code to be executed. Suitable mediums are disparate: USB storage devices, USB hosts, SD cards, NANDs, NORs, SPI dataflashes, Ethernet, UARTs, etc.
If none of the above contains something valid, the Boot ROM will usually issue a soft reset of the SoC, and so on.
The code in the medium is not, of course, executed in place: it gets copied into the Internal RAM then executed.
[The following two are contained in what we will call the bootloader medium]
1st stage bootloader - it has just been copied by the Boot ROM into the SoC's Internal RAM, and it must be tiny enough to fit that memory (usually well under 100 kB). It is needed because the Boot ROM isn't big enough and does not know what kind of External RAM the SoC is attached to. Its most important job is to initialize the External RAM and the SoC's external memory interface, as well as other peripherals that may be of interest (e.g. disabling watchdog timers). Once done, it copies the next stage to the External RAM and executes it. Depending on the context, it could be called MLO, SPL or something else.
2nd stage bootloader - the "main" bootloader. Bigger (could be 10x) than the 1st stage one, it completes the initialization of the relevant peripherals (e.g. Ethernet, additional storage media, LCD displays). It allows much more complicated logic for deciding what to do next and offers - depending on the level of sophistication - high-level facilities (filesystem/volume handling, data copy-move-interpretation, LCD output, an interactive console, failsafe policies). Most of the time it loads a Linux kernel (and related pieces) into memory from some medium and passes relevant information to it (e.g. if not embedded in the image, for newer kernels the DTB physical address is put in the r2 register - the kernel then reads the register and retrieves the DTB).
Linux Kernel - the core of the operating system. Depending on the hardware platform, it may or may not be a mainline ("official") version. It is usually completed by built-in or loadable (from an external source - free or not) modules. It initializes all the hardware needed for the complete system to work according to hardcoded configuration and the DT, enables the MMU, orchestrates the whole system and accesses the hardware exclusively. According to the boot arguments (cmdline - usually passed by the previous stage) and/or to compiled-in options, the kernel tries to mount a root file system. From the rootfs, it will try to load an init (namely /sbin/init, where / is the just-mounted rootfs).
Init and rootfs - init is the first non-kernel task to be run, and has PID 1. It initializes literally everything you need to use your system. In production embedded systems, it also starts the main application. In such systems it is either BusyBox or a custom-crafted application (a minimal sketch of such an init follows after this list).
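As an illustration of how small that last piece can be, here is a minimal, hypothetical init in C (the application path /usr/bin/myapp is a placeholder; a real init would also handle signals, orderly shutdown and more):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mount.h>
#include <sys/wait.h>

int main(void)
{
    /* Mount the pseudo-filesystems most userspace software expects. */
    mount("proc", "/proc", "proc", 0, NULL);
    mount("sysfs", "/sys", "sysfs", 0, NULL);

    /* Start the main application, like a production init would. */
    pid_t pid = fork();
    if (pid == 0) {
        execl("/usr/bin/myapp", "myapp", (char *)NULL);  /* placeholder path */
        perror("execl");
        _exit(EXIT_FAILURE);
    }

    /* PID 1 must never exit: reap children and restart the app if it dies. */
    for (;;) {
        int status;
        pid_t dead = wait(&status);
        if (dead == pid) {
            pid = fork();
            if (pid == 0) {
                execl("/usr/bin/myapp", "myapp", (char *)NULL);
                _exit(EXIT_FAILURE);
            }
        }
    }
}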
More on rootfs and distros
Rootfs contains everything in your GNU/Linux system that is not the kernel (apart from /lib/modules and other bits).
It contains all the applications that manage peripherals like Ethernet, WiFi, or external UMTS modems.
It contains the interactive part of the system, the user interface, and everything else you see when you boot a GNU/Linux system - embedded or not.
A "distro" is just a particular collection of userspace (non-kernel) programs and libraries (usually) verified to work well with one another, put together by a particular group of people.
Desktop distros usually also ship with a custom-tailored kernel and a bootloader. Examples are Fedora, Ubuntu, Debian, etc.
In the general sense of the term, nothing stops you from creating your own distro, which is what happens every time a custom embedded system goes into production: through tools like Yocto or Buildroot (or by hand), you are in fact able to decide the very particular collection (hence distro, distribution) of software fit for the purpose of the system.
To sum up and answer your question exactly: the missing part you are looking for is init and the process of mounting the rootfs. The kernel mounts - i.e. makes available to itself - via its drivers and the passed/built-in parameters, a given volume/partition (the ext4 data partition you mention) at the "/" mount point.
In this volume/partition there is a /sbin/init executable, which the Kernel executes.
This is the "Big Bang" of our GNU/Linux userspace system: the place where everything visible starts. Depending on the configuration scripts (usually located under /etc/init.d) the "application" you mention is either run automatically by init or by the user via a terminal/ssh/whatever that - again - init made you possible to use.