bpf resource limit using setrlimit - ebpf

When writing BPF programs, many online tutorials use
struct rlimit rlim_new = {
    .rlim_cur = RLIM_INFINITY,
    .rlim_max = RLIM_INFINITY,
};
setrlimit(RLIMIT_MEMLOCK, &rlim_new);
to remove the memory usage limit for BPF programs. This makes the program require root privileges. I wonder if there is an equivalent that does not require root privileges.
Thanks,
Peng.

Not possible
From man setrlimit:
The getrlimit() and setrlimit() system calls get and set resource limits respectively. Each resource has an associated soft and hard limit, as defined by the rlimit structure:
struct rlimit {
    rlim_t rlim_cur;  /* Soft limit */
    rlim_t rlim_max;  /* Hard limit (ceiling for rlim_cur) */
};
The soft limit is the value that the kernel enforces for the corresponding resource. The hard limit acts as a ceiling for the soft limit: an unprivileged process may only set its soft limit to a value in the range from 0 up to the hard limit, and (irreversibly) lower its hard limit. A privileged process (under Linux: one with the CAP_SYS_RESOURCE capability) may make arbitrary changes to either limit value.
As you can read, a non-root process (or rather, one without the relevant capability) can only lower its memory limit. This answers your question: there is no equivalent for unprivileged users. Which makes sense, because the purpose of the memory limit is to prevent unprivileged users from harming the system in the first place, and allowing them to bypass the limit would rather defeat that objective.
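A minimal sketch of what that means in practice (run as an unprivileged user; illustrative only):
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    getrlimit(RLIMIT_MEMLOCK, &rl);

    /* Raising the soft limit up to the hard limit is allowed unprivileged. */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_MEMLOCK, &rl) == 0)
        printf("soft limit raised to the hard limit: ok\n");

    /* Raising the hard limit requires CAP_SYS_RESOURCE; expect EPERM here. */
    rl.rlim_cur = RLIM_INFINITY;
    rl.rlim_max = RLIM_INFINITY;
    if (setrlimit(RLIMIT_MEMLOCK, &rl) != 0)
        printf("raising the hard limit failed: %s\n", strerror(errno));

    return 0;
}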
Seldom an issue
It is usually not an issue, because most eBPF-related operations require some privileges anyway. It used to be CAP_SYS_ADMIN; it is now a combination of CAP_SYS_ADMIN, CAP_BPF, CAP_NET_ADMIN and CAP_PERFMON, depending on the program types and features used. One notable exception is eBPF programs attached to network sockets, which may be attached without privileges if the kernel.unprivileged_bpf_disabled control knob has been set accordingly and if the program does not use forbidden features, see also this answer.
About to change
Note also that the memory accounting for eBPF objects is changing: newer kernels (starting with 5.11) use cgroup-based memory accounting, so the call to setrlimit() for eBPF objects becomes obsolete.
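Until then, a hedged sketch of how a loader could cope with both worlds, bumping the limit only on pre-5.11 kernels (the version parsing here is deliberately simplistic):
#include <stdio.h>
#include <sys/utsname.h>
#include <sys/resource.h>

/* Bump RLIMIT_MEMLOCK only where eBPF objects are still charged against it. */
static void maybe_bump_memlock(void)
{
    struct utsname u;
    int major = 0, minor = 0;

    if (uname(&u) == 0)
        sscanf(u.release, "%d.%d", &major, &minor);

    if (major > 5 || (major == 5 && minor >= 11))
        return;  /* cgroup-based accounting, no bump needed */

    struct rlimit rl = { RLIM_INFINITY, RLIM_INFINITY };
    if (setrlimit(RLIMIT_MEMLOCK, &rl) != 0)
        perror("setrlimit(RLIMIT_MEMLOCK)");  /* still needs privileges */
}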

Related

paging and binding schemes in memory management

The concept of paging in memory management can be used with which of the binding schemes?
By binding, I mean "mapping logical addresses to physical addresses". As far as I know there are three types of binding schemes: compile-time, load-time and execution-time binding.
Paging is not involved in compiling, so we can rule that out.
Load time can have two meanings: combining the object modules of a program and libraries to produce an executable image (program) with no unresolved symbols (the Unix definition), OR transferring a program into memory so it may execute (non-Unix).
What Unix calls loading, some other systems call link editing.
Unix loading/link editing is really part of compiling, so it doesn't involve paging at all. This operation does need to know the valid program addresses it can assign, which will permit the program to load. Conventionally these run from 0 up to a very large number like 2^31 or 2^47.
Transferring an image to memory and executing can be considered either phases of the same thing, or in demand loading environments, exactly the same thing. Either way, the bit of the system that prepares the program address space has to fill out a set of tables which relate a program address to a physical address.
The program address of main() might be 0x12345, which might be viewed as offset 0x345 from page 0x12. The operating system might attach that to physical page 0x100, meaning that main() might temporarily be at 0x100345. Temporarily, because the operating system is free to change this relation (conventionally called a mapping) at any time.
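In C, with 4 KiB (0x1000-byte) pages, that arithmetic looks like this (the physical frame 0x100 is just the value from the example above):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t vaddr = 0x12345;        /* program address of main() */
    uint32_t page  = vaddr >> 12;    /* 0x12  */
    uint32_t off   = vaddr & 0xFFF;  /* 0x345 */
    uint32_t frame = 0x100;          /* physical page picked by the OS */
    uint32_t paddr = (frame << 12) | off;

    printf("page 0x%x, offset 0x%x -> physical 0x%x\n", page, off, paddr);
    return 0;                        /* prints physical 0x100345 */
}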
The dynamic nature of these mappings is a positive attribute of paging, as it permits the system to reformulate its use of physical memory to meet changing demands.

Why no_new_privs bit is required with seccomp? example of theoretical exploit

I've seen that before using seccomp filter mode you have to set this bit, because it guarantees that a child process can't be executed with greater privileges than its parent's. But I still can't figure out an exploitation example. Could you show me one?
THEORETICAL SCENARIO: I have a program which can set seccomp filter mode without setting the no_new_privs bit.
GOAL: show a program which exploits it
This requirement ensures that an unprivileged process cannot apply a malicious filter and then invoke a set-user-ID or other privileged program using execve(2), thus potentially compromising that program. (Such a malicious filter might, for example, cause an attempt to use setuid(2) to set the caller's user IDs to nonzero values to instead return 0 without actually making the system call. Thus, the program might be tricked into retaining superuser privileges in circumstances where it is possible to influence it to do dangerous things because it did not actually drop privileges.)
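Putting the man-page scenario into code: a minimal sketch, assuming x86-64 syscall numbers and a hypothetical set-user-ID target named some-setuid-program. Because the kernel refuses to install the filter for an unprivileged caller unless no_new_privs is set, and no_new_privs stops the exec'd program from gaining privileges, the "exploit" is neutralised; the sketch only shows what the filter would do:
#include <stdio.h>
#include <stddef.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/seccomp.h>
#include <linux/filter.h>

int main(void)
{
    struct sock_filter filter[] = {
        /* Load the syscall number. A real filter should also check
           seccomp_data.arch before trusting the number. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
        /* If it is setuid, fake success: deliver "errno 0" without running it. */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_setuid, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | 0),
        /* Everything else runs normally. */
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };

    /* Mandatory for unprivileged callers; the question is what would go
       wrong if it were not. */
    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog) != 0) {
        perror("prctl(PR_SET_SECCOMP)");
        return 1;
    }

    /* Hypothetical set-user-ID victim. Without no_new_privs its
       setuid(getuid()) would appear to succeed while doing nothing,
       so it could carry root privileges into attacker-influenced code. */
    execlp("some-setuid-program", "some-setuid-program", (char *)NULL);
    perror("execlp");
    return 1;
}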

How do operating systems allow userspace programs to interact with kernelspace programs?

This isn't quite a question about a specific OS, but let's take Windows as an example. A userspace program uses the Windows API to communicate with kernelspace. However, I don't understand how that's possible. The API, according to MS websites, lives in userspace. In order to access kernelspace it has to be in kernelspace, if I understand it correctly. So what is the mechanism by which the Windows API gets extra privileges to speak to kernelspace? In which space does that mechanism operate? Is this sort of thing universal to all modern PC OSes?
As you're already aware, there are a bunch of facilities exposed to userspace programs by the Windows kernel (if you're curious, there's a list of system calls). These system calls are all identified by a unique number, which isn't part of the publicly documented interface given by Microsoft. Instead, when you call a publicly exposed function from your program, you land in a DLL installed when you install (or update) Windows, whose entry point is just a normal, unprivileged user-mode function call. This DLL knows the mappings between public interfaces and the system calls available in the currently running kernel. These mappings are not always 1:1, which allows for tweaks and enhancements without breaking existing code that uses the stable interfaces.
When some userland code calls one of these functions, its job is to prepare the arguments for the system call and then initiate the jump into kernel mode. How exactly that jump occurs is specific to the architecture Windows is currently running on. In fact it varies not just between x86 and Arm but even between AMD and Intel x86 systems. I'll talk just about the modern Intel x86 32-bit case (using the SYSENTER instruction) here for simplicity. On x86 most of the other variations are relatively minor; for instance, int 2Eh was used prior to SYSENTER support.
Early in boot up the operating system does a bunch of work to prepare for enabling a userland and system calls from it. Understanding this is critical to understanding how system calls really work.
First let's rewind a little and consider what exactly we mean by userland and kernelmode. On x86 when we talk about privileged vs un-privileged code we talk about "rings". There are actually 4 (ignoring hypervisors) but for various reasons nobody really used anything but ring0 (kernel) and ring3 (userland). When we run code on x86 the address that's being executed (EIP) and data that's being read/written come from segments.
Segments are mostly just a historical accident left over from the days before virtual addressing on x86 was a thing. They are however important for us here because there are special registers that define which segments are currently being used when we execute instructions or otherwise reference memory. Segments on x86 are all defined in a big table, called the Global Descriptor Table or GDT. (There's also a local descriptor table, LDT, but that's not going to further the current discussion here.) The important point for our discussion is that the (arcane) layout of the table entries includes 2 bits, called DPL, which define the privilege level of the currently active segment. You'll notice that 2 bits is exactly enough to define 4 levels of privilege.
So in short, when we talk about "executing in kernel mode" we really just mean that our active code segment (CS) and data segment selectors point to entries in the GDT which have DPL set to 0. Likewise for userland we have CS and data segment selectors pointing to GDT entries with DPL set to 3 and no access to kernel addresses. (There are other selectors too, but to keep it simple we'll just consider "code" and "data" for now.)
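A rough sketch of where DPL lives in a descriptor (field layout as described in the Intel SDM; illustrative only):
#include <stdint.h>

/* One 8-byte GDT entry; the access byte holds P (bit 7), DPL (bits 6-5),
   S (bit 4) and the type (bits 3-0). */
struct gdt_entry {
    uint16_t limit_low;
    uint16_t base_low;
    uint8_t  base_mid;
    uint8_t  access;
    uint8_t  limit_high_flags;
    uint8_t  base_high;
} __attribute__((packed));

static inline unsigned dpl_of(const struct gdt_entry *e)
{
    return (e->access >> 5) & 0x3;   /* 0 = kernel, 3 = user */
}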
Back to early on during kernel boot-up: during startup the kernel creates the GDT entries we need. (These have to be laid out in a specific order for SYSENTER to work, but that's mostly just an implementation detail.) There are also some "machine-specific registers" (MSRs) that control how our processor behaves. These can only be set by privileged code. Three of them are important here (a rough sketch of how a kernel might program them follows the list):
IA32_SYSENTER_ESP
IA32_SYSENTER_EIP
IA32_SYSENTER_CS
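The MSR numbers below are the architectural ones from the Intel SDM; the selector, stack and entry-point names are made-up placeholders for whatever the kernel actually uses:
#include <stdint.h>

#define IA32_SYSENTER_CS   0x174
#define IA32_SYSENTER_ESP  0x175
#define IA32_SYSENTER_EIP  0x176

/* Hypothetical stand-ins for the real kernel objects. */
extern void syscall_entry(void);        /* ring-0 system call handler */
extern uint8_t kernel_stack_top[];      /* kernel stack for system calls */
#define KERNEL_CS 0x08                  /* ring-0 code selector in the GDT */

/* wrmsr loads EDX:EAX into the MSR selected by ECX; ring 0 only. */
static inline void wrmsr(uint32_t msr, uint64_t value)
{
    __asm__ volatile("wrmsr" : : "c"(msr),
                     "a"((uint32_t)value), "d"((uint32_t)(value >> 32)));
}

void setup_sysenter(void)
{
    wrmsr(IA32_SYSENTER_CS,  KERNEL_CS);
    wrmsr(IA32_SYSENTER_ESP, (uintptr_t)kernel_stack_top);
    wrmsr(IA32_SYSENTER_EIP, (uintptr_t)syscall_entry);
}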
Recall that we've got some code running in userland (ring3) that wants to transition to ring0. Let's assume that it has saved any registers that it needs to per the calling convention and put arguments into the right registers that the call expects. We then hit the SYSENTER instruction. (Actually it goes through KiFastSystemCall, I think.) The SYSENTER instruction is special. It modifies the current code and data segment selectors based on the value that the kernel set up in the machine-specific register IA32_SYSENTER_CS (the stack/data segment values are computed as an offset of IA32_SYSENTER_CS). Subsequently the stack pointer itself (ESP) is set to the kernel stack that was set up for handling system calls earlier on and saved into the MSR IA32_SYSENTER_ESP, and likewise EIP, the instruction pointer, is loaded from IA32_SYSENTER_EIP.
Since the CS selector now points to a GDT entry with DPL set to 0, and EIP points to kernel-mode code on a kernel stack, we're running in the kernel at this point.
From here onwards the kernel mode code can read and write memory from both kernel and userspace (with some appropriate caution) to undertake the actual work needed to perform the system call. The arguments to the system call can be read from registers etc. according to the calling convention, but any arguments that are actually pointers back to userland or handles to kernel objects can be accessed to read larger blocks of data too.
When the system call is over the process is basically reversed and we end up back in userland with DPL 3 for the selectors.
It's the CPU that acts as the intermediary for the transfer of information between user memory space (accessible in user mode) and protected memory space (accessible in kernel mode), via CPU registers.
Here's an Example:
Suppose a user writes a program in a higher-level language. When the program executes, the CPU generates virtual addresses.
Before any read/write operation occurs, the virtual address is converted to a physical address. Because the translation mechanism (the memory management unit's tables) is only accessible in kernel mode, since it is stored in protected memory, the translation occurs in kernel mode and the physical address is finally saved into some register of the CPU, and only then does a read/write operation occur.

Lock-free shared variable in Swift? (functioning volatile)

The use of locks and mutexes is illegal in hard real-time callbacks. Lock-free variables can be read and written from different threads. In C, the language definition may or may not be broken, but most compilers spit out usable assembly code given that a variable is declared volatile (the reader thread treats the variable as a hardware register and thus actually issues load instructions before using the variable, which works well enough on most cache-coherent multiprocessor systems.)
Can this type of variable access be stated in Swift? Or do in-line assembly language or data cache flush/invalidate hints need to be added to the Swift language instead?
Added: Will the use of calls to OSMemoryBarrier() (from OSAtomic.h) before and after each use or update of any potentially inter-thread variables (such as "lock-free" fifo/buffer status counters, etc.) in Swift enforce sufficiently ordered memory load and store instructions (even on ARM processors)?
As you already mentioned, volatile only guarantees that the variable will not get cached in registers (it will itself get treated like a hardware register). That alone does not make it lock-free for reads and writes. It doesn't even guarantee atomicity, at least not in a consistent, cross-platform way.
Why? Instruction pipelining and oversizing (e.g. using Float64 on a platform that has 32-bit, or smaller, floating-point registers) come to mind first.
That being said, have you considered using OSAtomic?
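For example, a small C sketch of that route using a shared counter as the inter-thread variable (the OSAtomic calls are real but deprecated on current SDKs in favour of C11 stdatomic; the barrier semantics are the point here):
#include <libkern/OSAtomic.h>
#include <stdint.h>

static volatile int32_t write_index = 0;   /* shared "lock-free" fifo counter */

/* Producer side, e.g. a real-time audio callback: publish one more item. */
void producer_advance(void)
{
    OSAtomicIncrement32Barrier(&write_index);   /* atomic update plus full barrier */
}

/* Consumer thread: the barriers order surrounding loads/stores around the read. */
int32_t consumer_snapshot(void)
{
    OSMemoryBarrier();
    int32_t snapshot = write_index;
    OSMemoryBarrier();
    return snapshot;
}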

How can I limit the number of blocks written in a Write_10 command?

I have a product that is basically a USB flash drive based on an NXP LPC18xx microcontroller. I'm using a library provided from the manufacturer (LPCOpen) that handles the USB MSC and the SD card media (which is where I store data).
Here is the problem: internally the LPC18xx has a 64 kB buffer (limited by hardware) used to cache reads/writes, which means it can only cache up to 128 blocks (512 B each) of memory. The SCSI Write-10 command has a total-blocks field that can be up to 256 blocks (128 kB). When I originally tested the product on Windows 7 it never wrote more than 128 blocks at a time, but when tested on Linux it sometimes writes more than 128 blocks, which causes the microcontroller to crash.
Is there a way to tell the host OS not to request more than 128 blocks? I see references[1] to a Read Block Limits command (05h) but it doesn't seem to be widely supported. Also, what sense key would I return on the Write-10 command to tell Linux the write is too large? I also see references to a Block Limits VPD page in some device spec sheets but cannot find a lot of documentation about how it is implemented.
[1]https://en.wikipedia.org/wiki/SCSI_command
Let me offer a disclaimer up front that this is what you SHOULD do, but none of this may work. A cursory search of the Linux SCSI driver didn't show me what I wanted to see. So, I'm not at all sure that "doing the right thing" will get you the results you want.
Going by the book, you've got to do two things: implement the Block Limits VPD page and handle too-large transfer sizes in WRITE and READ.
First, implement the Block Limits VPD page, which you can find in late revisions of SBC-3 floating around on the Internet (like this one: http://www.13thmonkey.org/documentation/SCSI/sbc3r25.pdf). It's probably worth going to the t10.org site, registering, and then downloading the last revision (http://www.t10.org/cgi-bin/ac.pl?t=f&f=sbc3r36.pdf).
The Block Limits VPD page has a maximum transfer length field that specifies the maximum number of blocks that can be transferred by all the READ and WRITE commands, and basically anything else that reads or writes data. Of course the downside of implementing this page is that you have to make sure that all the other fields you return are correct!
Second, when handling READ and WRITE, if the command's transfer length exceeds your maximum, respond with an ILLEGAL REQUEST key, and set the additional sense code to INVALID FIELD IN CDB. This behavior is indicated by a table in the section that describes the Block Limits VPD, but only in late revisions of SBC-3 (I'm looking at 35h).
You might just start with returning INVALID FIELD IN CDB, since it's the easiest course of action. See if that's enough?
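A rough sketch of that easiest course of action on the firmware side (the WRITE(10) field offsets and sense codes are from SBC/SPC; msc_set_sense() is a made-up name for whatever your MSC stack uses to queue sense data):
#include <stdint.h>
#include <stdbool.h>

#define MAX_BLOCKS                128u   /* 64 kB cache / 512 B blocks */
#define SK_ILLEGAL_REQUEST        0x05
#define ASC_INVALID_FIELD_IN_CDB  0x24

/* Hypothetical helper: record sense data to return with CHECK CONDITION. */
void msc_set_sense(uint8_t key, uint8_t asc, uint8_t ascq);

/* WRITE(10): opcode 0x2A, transfer length is a big-endian 16-bit field in
   CDB bytes 7-8. If you also implement the Block Limits VPD page (B0h),
   report the same limit in its MAXIMUM TRANSFER LENGTH field. */
bool write10_length_ok(const uint8_t cdb[10])
{
    uint16_t nblocks = ((uint16_t)cdb[7] << 8) | cdb[8];

    if (nblocks > MAX_BLOCKS) {
        msc_set_sense(SK_ILLEGAL_REQUEST, ASC_INVALID_FIELD_IN_CDB, 0x00);
        return false;   /* caller responds with CHECK CONDITION status */
    }
    return true;
}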