Is there a function to detect whether a given virtual address mapped by mmap is protected by mprotect? Accessing such an address will result in a segmentation fault if PROT_NONE is set, so I'd like to detect up front whether it is protected or not.
Ideally I wouldn't need to install signal handlers. If there isn't any such function, any other lightweight solution is also fine. Thanks.
Currently, I am trying to trace Logical Block Address (LBA) accesses per process. I am aware of biosnoop.py, which probes blk_start_request. With that probe, though, the only processes I get to see are kworkers.
Two ideas to resolve this problem:
Find out which process the kworker got its current work item from. This seems rather complicated to do in eBPF (if it is possible at all).
Probe another kernel function where the LBA can be intercepted or obtained in some other way. I tried looking around in the virtual file system layer but did not find anything useful. Any recommendations?
I pretty much understand how the CAN protocol works: when two nodes attempt to use the bus at the same time, the frame with the lower ID wins arbitration, and the other node detects this and backs off.
This seems to be abstracted away when using SocketCAN: we simply write() and read() as we would with any file descriptor. I may be misunderstanding something, but I've gone through most of the docs (http://lxr.free-electrons.com/source/Documentation/networking/can.txt) and I don't think this behavior is described unambiguously.
Does write() block until our frame wins arbitration, or does SocketCAN buffer the frame until the bus is ready? If the latter, is the user notified when transmission succeeds, or do we use the loopback for this?
write() does not block on channel contention. It could block for the same reasons a TCP socket write would (which is very unlikely here).
The CAN peripheral receives a frame to be transmitted from the kernel and performs the Medium Access Control (MAC) part of the protocol to send it over the wire. SocketCAN knows nothing about this layer of the protocol.
Where the frame is buffered is peripheral- and driver-dependent: the kernel-driver-peripheral chain behaves as three chained FIFOs, each with its own flow-control mechanism, but usually it is the driver that does most of the buffering (when needed), since the peripheral has less memory available.
It is possible to subscribe to errors in the CAN protocol stack (signaled by so-called "error frames") by passing certain flags through the SocketCAN interface (see section 4.1.2 in your link): this is the way to get error information at the application layer.
Of course you can check for a correctly transmitted frame on the loopback interface, but that is overkill; the error-reporting mechanism described above is both the intended way and easier to use.
I was wondering,
Is there any way to force a write to a 'Read-Only' Modbus register?
Is defining a register as 'Read-Only' secure enough, or can it be bypassed?
Thanks for the answers!
The correct way to define a "read-only" analog variable in Modbus is to map it as an input register. There is no function code defined in Modbus to write to an input register.
For historical reasons, several vendors map all their variables as holding registers, which are in principle read/write, i.e. there is a Write Multiple Registers function for them. Whenever they map a read-only variable as a holding register, they must ensure that the write functions fail. However, there is no standard exception code for this, since a holding register is supposed to be read/write. This is just one of Modbus's idiosyncrasies.
Getting back to your question: if you map your variable as an input register, you can be sure that the protocol will not allow a master to write to it. If, for interoperability reasons, you map it as a holding register, the protocol will allow the master to use a write function to change its value, and it is up to your device implementation to block the write.
I find that neither my textbooks nor my googling skills give me a proper answer to this question. I know it depends on the operating system, but in general: what happens, and why?
My textbook says that a system call causes the OS to go into kernel mode, given that it's not already there. This is needed because kernel mode is what has control over I/O devices and other things outside of a specific process's address space. But if I understand it correctly, a switch to kernel mode does not necessarily mean a process context switch (where you save the current state of the process somewhere other than the CPU so that some other process can run).
Why is this? I had been thinking that some "admin" process was switched in, took care of the system call on behalf of the process, and delivered the result into the process's address space, but I guess I'm wrong. I can't seem to grasp what is ACTUALLY happening in a switch to and from kernel mode, and how this affects a process's ability to operate on I/O devices.
Thanks a lot :)
EDIT: bonus question: does a library call necessarily end up in a system call? If not, do you have any examples of library calls that do not end up in system calls? If yes, why do we have library calls at all?
Historically, system calls have been issued with software interrupts: Linux used vector 0x80 and Windows NT used vector 0x2E, with the system call number stored in the eax register. More recently, the faster SYSENTER and SYSEXIT instructions came into use (and SYSCALL/SYSRET on x86-64). User applications run in Ring 3, i.e. user space/user mode. The CPU is very tricky here, and switching between kernel mode and user mode requires special care: returning to user mode actually involves making the CPU behave as if it were returning from an earlier trap, via a special instruction called iret. The only ways to get from user mode back into kernel mode are an interrupt/exception or the SYSENTER/SYSEXIT pair already mentioned. For interrupt entry, the CPU consults a special structure called the Task State Segment (TSS for short) to find where the kernel's stack is (SYSENTER reads it from model-specific registers instead). So yes, entering the kernel essentially involves a stack switch, but not a full process context switch.
But what really happens?
When you issue a system call via int, the CPU looks at the TSS, takes its esp0 value (the kernel's stack pointer) and loads it into esp. The CPU then looks up the interrupt vector's index in another special structure, the Interrupt Descriptor Table (IDT for short), and finds an address. This address is where the function that handles the system call lives. On the way in, the CPU pushes the user's stack pointer, the flags register, the code segment, and the instruction pointer of the instruction following the int onto the kernel stack. After the system call has been serviced, the kernel issues an iret; the CPU then returns to user mode and your application continues as normal.
Do all library calls end in system calls?
Well, most of them do, but there are some that don't. For example, take a look at memcpy, strlen and the other string/memory routines: they are pure computation and never enter the kernel. Even printf does most of its work (formatting, buffering) in user space and only issues a write() system call when its buffer is flushed. That is one reason library calls exist: they are much cheaper than system calls and can batch work before crossing into the kernel.
Introduction:
We have an application in which Linux running on an ARM accepts data from an external processor, which DMAs the data into the ARM's memory space. The ARM then needs to access that data from user-mode code.
The range of addresses must be physically contiguous, as the DMA engine in the external processor does not support scatter/gather. This memory range is initially allocated from the ARM kernel via a __get_free_pages(GFP_KERNEL | __GFP_DMA, order) call, as this assures us that the memory allocated will be physically contiguous. Then a virt_to_phys() call on the returned pointer gives us the physical address that is then provided to the external processor at the beginning of the process.
This physical address is also known to the Linux user-mode code, which uses it (in user mode) in a call to the mmap() API to get a user-mode pointer to this memory area. Our Linux kernel driver then sees the corresponding call to the mmap routine in its file_operations structure. The driver retains the vm_area_struct "vma" pointer that is passed to its mmap routine for use later.
When the user-mode code receives a signal that new data has been DMA'd to this memory address, it needs to access it via the user-mode pointer obtained from the call to mmap() mentioned above. Before the user-mode code does this, of course, the cache lines covering this memory range must be flushed. To accomplish this flush, the user-mode code calls the driver (via an ioctl), and in kernel mode a call to flush_cache_range() is made:
flush_cache_range(vma,start,end);
The arguments passed to the call above are the "vma" which the driver had captured when its mmap routine was called and "start" and "end" are the user mode addresses passed into the driver from the user mode code in a structure provided to the ioctl() call.
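For reference, the driver-side mmap described above is typically implemented with remap_pfn_range(). This is a kernel-side sketch, not standalone-runnable; buf_phys stands in for the physical address we got from virt_to_phys() on the __get_free_pages() buffer:

```c
/* Sketch of the driver's file_operations.mmap handler. */
static int my_driver_mmap(struct file *filp, struct vm_area_struct *vma)
{
    unsigned long size = vma->vm_end - vma->vm_start;

    /* For an uncached user mapping one would additionally set:
     *   vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
     * With the default vm_page_prot the user mapping is cached, which is
     * what makes explicit cache maintenance necessary. */
    if (remap_pfn_range(vma, vma->vm_start, buf_phys >> PAGE_SHIFT,
                        size, vma->vm_page_prot))
        return -EAGAIN;

    return 0;
}
```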
The Problem:
What we see is that the buffer does not seem to be getting flushed, as we are seeing what appears to be stale data when accesses from user mode are made. As a test, rather than getting the user-mode address from an mmap() call to our driver, we instead call mmap() on /dev/mem. In this case we get uncached access to the buffer (no flushing needed), and then everything works perfectly.
Our kernel version is 3.8.3 and it's running on an ARM9. Is there a logical error in the approach we are attempting?
Thanks!
I have a few questions, after which I might be able to answer:
1) How do you use a "PHYSICAL" address in your mmap() call? mmap() should have nothing to do with physical addresses.
2) What exactly do you do to get user virtual addresses in your driver?
3) How do you map these user virtual addresses to physical addresses, and where do you do it?
4) Since you preallocate using __get_free_pages(), do you map the buffer into kernel space using ioremap_cache()?