I am modifying a tool that currently opens files and reads them with fread() to use memory-mapped files. This program frequently reads from devices that may have I/O errors. Currently we catch these with errors returned by fread(). How do I/O errors get reported with memory-mapped files?
The Linux man page referenced by vy32 explicitly states that SIGSEGV is generated on write failure (e.g. no disk space), but it is unclear whether read failures generate such errors (e.g. when removable media has been physically removed). Wikipedia seems to be more specific on that:
I/O errors on the underlying file (e.g. its removable drive is unplugged or optical media is ejected, disk full when writing, etc.) while accessing its mapped memory are reported to the application as the SIGSEGV/SIGBUS signals on POSIX, and the EXECUTE_IN_PAGE_ERROR structured exception on Windows. All code accessing mapped memory must be prepared to handle these errors, which don't normally occur when accessing memory.
POSIX specification of mmap does not require that the signal is delivered on error but leaves such possibility for implementations:
An implementation may generate SIGBUS signals when a reference would cause an error in the mapped object, such as out-of-space condition.
Okay, it looks like SIGSEGV or SIGBUS is generated when an attempt is made to access mapped memory that is not available.
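For read errors specifically, the usual pattern is to install a signal handler and recover with sigsetjmp/siglongjmp. Here is a minimal sketch, assuming the fault is delivered as SIGBUS (some platforms may deliver SIGSEGV instead); the file name "data.bin" and the checksum loop are only illustrative:

#include <fcntl.h>
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static sigjmp_buf read_env;

static void bus_handler(int sig)
{
    (void)sig;
    siglongjmp(read_env, 1);   /* jump back out of the faulting read loop */
}

int main(void)
{
    int fd = open("data.bin", O_RDONLY);   /* hypothetical input file */
    if (fd == -1) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) == -1 || st.st_size == 0) { close(fd); return 1; }

    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    struct sigaction sa = { .sa_handler = bus_handler };
    sigemptyset(&sa.sa_mask);
    sigaction(SIGBUS, &sa, NULL);          /* add SIGSEGV too if your platform uses it */

    unsigned long sum = 0;
    if (sigsetjmp(read_env, 1) == 0) {
        for (off_t i = 0; i < st.st_size; i++)
            sum += (unsigned char)p[i];    /* may fault if the underlying read fails */
        printf("checksum: %lu\n", sum);
    } else {
        fprintf(stderr, "I/O error while reading mapped file\n");
    }

    munmap(p, st.st_size);
    close(fd);
    return 0;
}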
I need to understand the difference between EAGAIN and EWOULDBLOCK, as I have seen much source code checking against EAGAIN only. (Maybe both codes represent the same number; correct me here.)
What I know so far:
For a blocking socket, if the send buffer is full and the receiver is not receiving any data, the sender will hang when it calls send(). This is because the space data occupies in the buffer is only made available for new data once the receiver has read it. If your socket is in non-blocking mode, then send() will instead fail with EAGAIN or EWOULDBLOCK.
Are they always the same number, or is there any scenario where they need to be treated differently?
In short: they're almost always the same value, but for portability it's recommended to check for both values (and treat both values the same way).
For most systems, EAGAIN and EWOULDBLOCK will be the same. There are only a few systems in which they are different, and you can see the list of those systems in this answer.
Even the errno manpage mentions that they "may be the same [value]".
Historically, however, EWOULDBLOCK was defined for "operation would block" - that is, the operation would have blocked, but the descriptor was placed in non-blocking mode. EAGAIN originally indicated that a "temporary resource shortage made an operation impossible". The example used by the GNU documentation is when there are not enough resources to fork(). Because the resource shortage was expected to be temporary, a subsequent attempt to perform the action might succeed (hence the name "again").
Practically speaking, those types of temporary resource shortages are not that common (but pretty serious when they do occur).
Most systems define these values as the same, and the systems which don't will become more and more uncommon in the future. Nevertheless, for portability reasons you should check for both values, but you should also treat both errors in the same way. As the GNU documentation states:
Portability Note: In many older Unix systems ... [EWOULDBLOCK was] a distinct error code different from EAGAIN. To make your program portable, you should check for both codes and treat them the same.
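As a concrete illustration of "check for both codes and treat them the same", here is a minimal C sketch; sockfd is assumed to be an already-connected socket that was put into non-blocking mode elsewhere:

#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

ssize_t send_some(int sockfd, const void *buf, size_t len)
{
    ssize_t n = send(sockfd, buf, len, 0);
    if (n == -1) {
        if (errno == EAGAIN || errno == EWOULDBLOCK) {
            /* Send buffer is full: not a real error, retry later
               (e.g. after poll() reports the socket writable). */
            return 0;
        }
        perror("send");   /* any other errno is a genuine error */
    }
    return n;
}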
They are functionally the same. The reason for the two different names is historical, going back to the 1980s. EWOULDBLOCK was used on BSD/Sun variants of Unix, and EAGAIN was the AT&T System V error code.
For a compiled binary on a particular system the codes should have the same value. The reason both names are defined in include files is for source code portability.
They are the same.
Defined in the include/uapi/asm-generic/errno.h file:
#define EWOULDBLOCK EAGAIN /* Operation would block */
I am trying to understand why there are ioctl calls in socket.c. In the modified kernel I am using, there are some ioctl calls which load the required modules when the calls are made.
I was wondering why these calls ended up in socket.c. Isn't a socket kind of not-a-device, while ioctls are primarily used for devices?
I am talking about a heavily modified 2.6.32.0 kernel here.
ioctl suffers from its historic name. While originally developed to perform I/O controls on devices, it has a generic enough construct that it may be used for arbitrary service requests to the kernel in the context of a file descriptor. A file descriptor is an opaque value (just an int) provided by the kernel that can be associated with anything.
Now if you take file descriptors and think of things as files, which most *nix constructs do, open/read/write/close isn't enough. What if you want to label a file (rename)? What if you want to wait for a file to become available (ioctl)? What if you want to terminate everything if a file closes (termios)? All the "meta" operations that don't make sense in the core read/write context are lumped under ioctls, fcntls, etc., unless they are so frequently used that they deserve their own system call (e.g. the flock(2) functionality in BSD 4.2).
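To make that concrete, here is a small sketch (not taken from the question's kernel) of one such "meta" request on a socket descriptor: FIONREAD asks the kernel how many bytes are currently queued for reading. sockfd is assumed to be an open socket:

#include <stdio.h>
#include <sys/ioctl.h>

void report_pending(int sockfd)
{
    int pending = 0;
    /* Ask the kernel how much data is waiting on this descriptor. */
    if (ioctl(sockfd, FIONREAD, &pending) == -1) {
        perror("ioctl(FIONREAD)");
        return;
    }
    printf("%d bytes waiting on socket %d\n", pending, sockfd);
}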
From local device testing, I've seen that writing a file to the iOS file system (regardless of how low-level the call you use is) will often return success before the file is fully committed to flash. Meaning, if you hard reset the device and then reboot, your file could be rolled back (if the write completed or was atomic) or corrupted. What is the source of this delay (documentation appreciated, I haven't been able to find anything), and is there a way to get feedback when the actual filesystem write is completed? For instance, I'd like to acknowledge receipt and storage of a piece of data from a remote server, but I find that acknowledging it after the write "reports" success could result in data loss in the event of a hard crash or power failure.
Since this is a 4-year-old question, I'll provide not only the answer but also the path I took while searching for it.
I was not able to find any clear explanation in the official documentation: File System Programming Guide. There was only a clue in the Performance Tips section. It states that:
Apps can call the BSD fcntl function with the F_NOCACHE flag to enable or disable caching for a file. For more information about this function, see fcntl.
Enabling the F_NOCACHE flag does not solve the problem you're describing; however, the manual for the fcntl function mentions an option that you might find interesting:
F_FULLFSYNC Does the same thing as fsync(2) then asks the drive to flush all buffered data to the permanent storage device
(from man fcntl, see here).
I've checked the manual for fsync for more details. It eventually gave me the clearest and most understandable explanation of both the problem and the solution:
Note that while fsync() will flush all data from the host to the drive (i.e. the "permanent storage device"), the drive itself may not physically write the data to the platters for quite some time and it may be written in an out-of-order sequence.
Specifically, if the drive loses power or the OS crashes, the application may find that only some or none of their data was written. The disk drive may also re-order the data so that later writes may be present, while earlier writes are not.
This is not a theoretical edge case. This scenario is easily reproduced with real world workloads and drive power failures.
For applications that require tighter guarantees about the integrity of their data, Mac OS X provides the F_FULLFSYNC fcntl. The F_FULLFSYNC fcntl asks the drive to flush all buffered data to permanent storage. Applications, such as databases, that require a strict ordering of writes should use F_FULLFSYNC to ensure that their data is written in the order they expect.
(from man fsync, see here).
Yeah, it's definitely not a theoretical edge case. Thankfully, once you know the problem, the solution is trivial:
let filePath: String = "your file path"
// you can use an option other than read-write
let fd = open(filePath, O_RDWR)
// if fd is -1, there was an error opening file, handle it as you wish
guard fd != -1 else { return }
// syncResult is -1 if sync operation failed, handle it as you wish
let syncResult = fcntl(fd, F_FULLFSYNC)
// don't forget to close opened file
close(fd)
Once fcntl finishes, your data will be saved.
Notice this operation is slower than a usual write to a file (via NSFileManager or the writeToURL family of methods). In case of performance issues, it's best to move the writing to a background thread.
Here's a passage from the book
When executing kernel code, the system is in kernel-space executing in kernel mode. When running a regular process, the system is in user-space executing in user mode.
Now what really is kernel code and what is user code? Can someone explain with an example?
Say I have an application that does printf("HelloWorld"). Now, while executing this application, will it be user code or kernel code?
I guess that at some point user code will switch into kernel mode and kernel code will take over, but I guess that's not always the case, since I came across this:
For example, the open() library function does little except call the open() system call. Still other C library functions, such as strcpy(), should (one hopes) make no direct use of the kernel at all.
If it does not make use of the kernel, then how does it make everything work?
Can someone please explain the whole thing in a lucid way.
There isn't much difference between kernel and user code as such, code is code. It's just that the code that executes in kernel mode (kernel code) can (and does) contain instructions only executable in kernel mode. In user mode such instructions can't be executed (not allowed there for reliability and security reasons), they typically cause exceptions and lead to process termination as a result of that.
I/O, especially with external devices other than the RAM, is usually performed by the OS somehow and system calls are the entry points to get to the code that does the I/O. So, open() and printf() use system calls to exercise that code in the I/O device drivers somewhere in the kernel. The whole point of a general-purpose OS is to hide from you, the user or the programmer, the differences in the hardware, so you don't need to know or think about accessing this kind of network card or that kind of display or disk.
Memory accesses, OTOH, most of the time can just happen without the OS' intervention. And strcpy() works as is: read a byte of memory, write a byte of memory, oh, was it a zero byte, btw? repeat if it wasn't, stop if it was.
I said "most of the time" because there's often page translation and virtual memory involved and memory accesses may result in switched into the kernel, so the kernel can load something from the disk into the memory and let the accessing instruction that's caused the switch continue.
I have a basic doubt regarding executables stored in ROM.
As I understand it, the executable, with its text and read-only attributes, is stored in ROM. The question is: since ROM is Read-Only Memory, what happens if there is a situation where the code needs to write into memory?
I am not able to conjure up any example to cite here (probably I am ignorant of such a situation or I am missing out on basic stuff ;) but any light on this topic can greatly help me to understand! :)
Lastly:
1. Is there any such situation?
2. In such a case, is copying the code from ROM to RAM the answer?
An answer with an example would greatly help.
Many thanks in advance!
/MS
Read-only memory is read only because of hardware restrictions. The program might be in an EEPROM, flash memory protected from writes, a CD-ROM, or anything where the hardware physically disallows writing. If software writes to ROM, the hardware is incapable of changing the stored data, so nothing happens.
So if a software program in ROM wants to write to memory, it writes to RAM. That's the only option. If a program is running from ROM and wants to change itself, it can't, because it can't write to ROM. But yes, the program can be copied to RAM and run from there.
In fact, running from ROM is rare except in the smallest embedded systems. Operating systems copy executable code from ROM to RAM before running it. Sometimes code is compressed in ROM and must be decompressed into RAM before running. If RAM is full, the operating system uses paging to manage it. The reason running from ROM is so rare is because ROM is slower than RAM and sometimes code needs to be changed by the loader before running.
Note that if you have code that modifies itself, you really have to know your system. Many systems use data-execution prevention (DEP). Executable code goes in read+execute areas of RAM. Data goes in read+write areas. So on these systems, code can never change itself in RAM.
Normally only program code, constants and initialisation data are stored in ROM. A separate memory area in RAM is used for stack, heap, etc.
There are few legitimate reasons why you would want to modify the code section at runtime. The compiler itself will not generate code that requires that.
Your linker will have an option to generate a MAP file. This will tell you where all memory objects are located.
The linker chooses where to locate objects based on a linker script (which you can customise to organise memory as you require). Typically on a FLASH-based microcontroller, code and constant data will be placed in ROM. Also placed in ROM is the initialisation data for non-zero initialised static data; this is copied to RAM before main() is called. Zero-initialised static data is simply cleared to zero before main().
It is possible to arrange for the linker to locate some or all of the code in ROM and have the run-time start-up code copy it to RAM in the same way as the non-zero static data. However, the code must either be relocatable or be linked to run from RAM in the first instance; you cannot usually just copy code intended to run from ROM into RAM and expect it to run, since it may contain absolute address references (unless perhaps your target has an MMU and can remap the address space). Locating code in RAM on microcontrollers is normally done to increase execution speed, since RAM is typically faster than FLASH when high clock speeds are used, producing fewer or zero wait states. It may also be done when code is loaded at runtime from a filesystem rather than stored in ROM. Even when loaded into RAM, if the processor has an MMU, it is likely that the code section in RAM will be marked read-only.
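As a sketch of what that start-up copy typically looks like in C (the symbol names below are illustrative; the real ones are defined by your toolchain's linker script):

/* Initial values of .data live in ROM/flash; .data and .bss live in RAM. */
extern unsigned char __data_load[];   /* where the initial values are stored in ROM */
extern unsigned char __data_start[];  /* start of .data in RAM */
extern unsigned char __data_end[];
extern unsigned char __bss_start[];   /* start of zero-initialised statics in RAM */
extern unsigned char __bss_end[];

extern int main(void);

void reset_handler(void)
{
    /* Copy initialised data from ROM to RAM. */
    for (unsigned char *src = __data_load, *dst = __data_start; dst < __data_end; )
        *dst++ = *src++;

    /* Clear zero-initialised data. */
    for (unsigned char *dst = __bss_start; dst < __bss_end; )
        *dst++ = 0;

    main();
    for (;;) ;   /* main() should not return on a bare-metal target */
}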
Harvard architecture microcontrollers
Many small microcontrollers (Microchip PIC, Atmel AVR, Intel 8051, Cypress PSoC, etc.) have a Harvard architecture.
They can only execute code from the program memory (flash or ROM).
It's possible to copy any byte from program memory to RAM.
However, (2) copying executable instructions from ROM to RAM is not the answer -- with these small microcontrollers, the program counter always refers to some address in the program memory. It's not possible to execute code in RAM.
Copying data from ROM to RAM is pretty common.
When power is first applied, a typical firmware application zeros all the RAM and then copies the initial values of non-const global and static variables out of ROM into RAM just before main() starts.
Whenever the application needs to push a fixed string out the serial port, it reads that string out of ROM.
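For instance, on an AVR target with avr-libc, a string kept only in program memory can be read out byte by byte like this (uart_putc() is a hypothetical serial-output routine, not part of the original answer):

#include <avr/pgmspace.h>

/* The string is placed in flash (program memory), not copied into RAM. */
static const char greeting[] PROGMEM = "hello from flash\r\n";

extern void uart_putc(char c);   /* assumed to exist elsewhere in the firmware */

void send_greeting(void)
{
    const char *p = greeting;
    char c;
    while ((c = pgm_read_byte(p++)) != '\0')   /* read each byte out of flash */
        uart_putc(c);
}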
With early versions of these microcontrollers, an external "device programmer" connected to the microcontroller was the only way to change the program.
In normal operation, the device was nowhere near a "device programmer".
If the software running on the microcontroller needed to write to program memory ROM -- sorry, too bad --
it was impossible.
Many embedded systems had non-volatile EEPROM that the code could write to -- but this was only for storing data values. The microcontroller could only execute code in the program ROM, not the EEPROM or RAM.
People did many wonderful things with these microcontrollers, including BASIC interpreters and bytecode Forth interpreters.
So apparently (1) code never needs to write to program memory.
With a few recent "self-programming" microcontrollers (from Atmel, Microchip, Cypress, etc.),
there's special hardware on the chip that allows software running on the microcontroller to erase and re-program blocks of its own program memory flash.
A few applications use this "self-programming" feature to read and write data in "extra" flash blocks -- data that is never executed, so it doesn't count as self-modifying code -- but this isn't doing anything you couldn't do with a bigger EEPROM.
So far I have only seen two kinds of software running on Harvard-architecture microcontrollers that write new executable software to their own program Flash: bootloaders and Forth compilers.
When the Arduino bootloader (bootstrap loader) runs and detects that a new application firmware image is available, it downloads the new application firmware (into RAM), and writes it to Flash.
The next time you turn on the system it's now running shiny new version 16.98 application firmware rather than clunky old version 16.97 application firmware.
(The Flash blocks containing the bootloader itself, of course, are left unchanged).
This would be impossible without the "self-programming" feature of writing to program memory.
Some Forth implementations run on a small microcontroller, compiling new executable code and using the "self-programming" feature to store it in program Flash -- a process somewhat analogous to the JVM's "just-in-time" compiling.
(All other languages seem to require a compiler far too large and complicated to run on a small microcontroller, and therefore have an edit-compile-download-run cycle that takes much more wall-clock time.)