Glancing at the source code of the GNU C Library, I found that inet_ntoa is implemented with
static __thread char buffer[18]
My question is: since there is a need for a reentrant inet_ntoa, why didn't the authors of the GNU C Library use malloc to implement it?
Thanks.
The reason it's not using the heap is to conform with standards (POSIX) and other systems. The interface is simply not one where you are supposed to free the returned buffer; it assumes static storage.
But because the buffer is declared thread-local (with __thread), two threads do not conflict with each other if they both happen to call the function at the same time. This is glibc's workaround for the brokenness of the interface.
It's true that this is not re-entrant or consistent with the spirit of that term. If you have a recursive function that calls it, you cannot rely on the buffer's contents being preserved across calls. But it can be used by multiple threads, which is often good enough.
EDIT: By the way, I just remembered, there is a newer version of this function that uses a caller-provided buffer. See inet_ntop().
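For reference, here is a minimal sketch of inet_ntop() with a caller-provided buffer (the 192.0.2.1 address is just an example value):

#include <arpa/inet.h>
#include <stdio.h>

int main(void)
{
    struct in_addr addr;
    char buf[INET_ADDRSTRLEN];   /* caller-provided buffer, no static storage */

    if (inet_pton(AF_INET, "192.0.2.1", &addr) != 1)
        return 1;

    /* inet_ntop() writes into the buffer we supply, so nothing is shared
     * between threads or between calls, unlike inet_ntoa()'s static buffer. */
    if (inet_ntop(AF_INET, &addr, buf, sizeof buf) == NULL)
        return 1;

    printf("%s\n", buf);
    return 0;
}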
Is it possible to tail call eBPF programs that use different modes?
For example, if I wrote a program that does printk("hello world") using a kprobe,
would I be able to tail call an XDP program afterwards, or vice versa?
I wrote an eBPF program that uses a socket buffer, and it seems that when I try to tail call another program that uses a kprobe, the program doesn't load.
I wanted to tail call a program that uses XDP_PASS after using BPF.SOCKET_FILTER mode, but the tail call doesn't seem to work.
I've been trying to figure this out, but I can't find any documentation about tail calling programs that use different modes :P
Thanks in advance!
No, it is not.
Have a look at kernel commit 04fd61ab36ec, which introduced tail calls: the first piece of code in that commit (in the internal kernel header bpf.h) defines struct bpf_array with an owner_prog_type member, and explains it in the following comment:
/* 'ownership' of prog_array is claimed by the first program that
* is going to use this map or by the first program which FD is stored
* in the map to make sure that all callers and callees have the same
* prog_type and JITed flag
*/
So once the program type associated with a BPF program array (the map used for tail calls) has been defined, it is not possible to use that array with other program types. Which makes sense, since different program types work with different contexts (packet data vs. traced function context vs. ...), can use different helpers, have return values with different meanings, and require different checks from the verifier... So it's hard to see how jumping from one type to another would work. How could you start by processing a network packet, and all of a sudden jump to a piece of code that is supposed to trace some internals of the kernel? :)
Note that it is also impossible to mix JIT-ed and non-JIT-ed programs, as indicated by the owner_jited member of the struct.
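To illustrate the constraint, here is a minimal sketch in libbpf-style BPF C (not the BCC Python API used in the question; the names jmp_table, caller and callee are just examples): both the program that performs the jump and the program stored in the array are of the same type, XDP in this case, which satisfies the owner_prog_type check described above. User space would still need to store the callee program's file descriptor at index 0 of the map before the tail call can actually take the jump.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
    __uint(max_entries, 1);
    __uint(key_size, sizeof(__u32));
    __uint(value_size, sizeof(__u32));
} jmp_table SEC(".maps");

SEC("xdp")
int callee(struct xdp_md *ctx)
{
    return XDP_PASS;
}

SEC("xdp")
int caller(struct xdp_md *ctx)
{
    /* Jumps to the program stored at jmp_table[0] if there is one, and never
     * returns on success; falls through otherwise. The verifier would reject
     * this call if jmp_table were "owned" by a different program type. */
    bpf_tail_call(ctx, &jmp_table, 0);
    return XDP_DROP;
}

char _license[] SEC("license") = "GPL";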
I'm looking at Rust as a replacement for C/C++ in hard realtime programming. There are two possible issues I've identified:
1) How do I avoid invoking Rust's GC? I've seen suggestions that I can do this by simply avoiding managed pointers and non-realtime-safe libraries (such as Rust's standard library) -- is this enough to guarantee my realtime task will never invoke the GC?
2) How do I map my realtime task to an OS thread? I know Rust's standard library implements an N:M concurrency model, but a realtime task must correspond directly with one OS thread. Is there a way to spawn a thread of this type?
1) How do I avoid invoking Rust's GC? I've seen suggestions that I can do this by simply avoiding managed pointers and non-realtime-safe libraries (such as Rust's standard library) -- is this enough to guarantee my realtime task will never invoke the GC?
Yes, avoiding managed (@) pointers will avoid the GC. (Rust currently doesn't actually have the GC implemented, so all code avoids it automatically, for now.)
2) How do I map my realtime task to an OS thread? I know Rust's standard library implements an N:M concurrency model, but a realtime task must correspond directly with one OS thread. Is there a way to spawn a thread of this type?
std::task::spawn_sched(std::task::SingleThreaded, function) (the peculiar formatting will be fixed when #10095 lands), e.g.
use std::task;

fn main() {
    do task::spawn_sched(task::SingleThreaded) {
        println("on my own thread");
    }
}
That said, Rust's runtime & standard libraries aren't set up for hard-realtime programming (yet), but you can run "runtimeless" using #[no_std] (example) which gives you exactly the same situation as C/C++, modulo language differences and the lack of a standard library (although Rust's FFI means that you can call into libc relatively easily, and the rust-core project is designed to be a minimal stdlib that doesn't even require libc to work).
I'd like to play with Unix system calls, ideally from Ruby. How can I do so?
I've heard about Fiddle, but I don't know where to begin or which C library I should attach it to.
I assume by "interactively" you mean via irb.
A high-level language like Ruby is going to provide wrappers for most kernel syscalls, of varying thickness.
Occasionally these wrappers will be very thin, as with sysread() and syswrite(). These are more or less equivalent to read(2) and write(2), respectively.
Other syscalls will be hidden behind thicker layers, such as with the socket I/O stuff. I don't know if calling UNIXSocket.recv() counts as "calling a syscall" precisely. At some level, that's exactly what happens, but who knows how much Ruby and C code stands between you and the actual system call.
Then there are those syscalls that aren't in the standard Ruby API at all, most likely because they don't make much sense there, like mmap(2). That syscall is all about raw pointers to memory, something you've chosen to avoid by using a language like Ruby in the first place. There happens to be a third-party Ruby mmap module, but it's really not going to give you all the power you can tap from C.
The syscall() interface Mat pointed out in the comment above is a similar story: in theory, it lets you call any system call in the kernel. But, if you don't have the ability to deal with pointers, lay out data precisely in memory for structures, etc., your ability to make useful calls is going to be quite limited.
If you want to play with system calls, learn C. There is no shortcut.
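To give a flavour of what that looks like, here is a minimal C sketch (Linux-specific in the syscall(2) part): the same write is done once through the thin libc wrapper and once through the generic syscall interface that Ruby's syscall() ultimately reaches, and in both cases you are responsible for handing the kernel a valid pointer and length yourself.

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>
#include <string.h>

int main(void)
{
    const char *msg = "hello from write(2)\n";

    /* The thin libc wrapper around the system call... */
    write(STDOUT_FILENO, msg, strlen(msg));

    /* ...and the raw syscall(2) interface: same effect, but you pick the
     * syscall number and marshal every argument yourself. */
    syscall(SYS_write, STDOUT_FILENO, msg, strlen(msg));

    return 0;
}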
Eric Wong started a mailing list for system-level programming in Ruby. It isn't terribly active now, but you can get to it at http://librelist.com/browser/usp.ruby/.
Do the risks caused by bypassing Perl's safe signals, for example as shown in the second timeout example in the DBI documentation, concern only the code that does the bypassing?
The code in that example works hard to localize the change to just that section of code, or any code called from it.
There is no 100% guarantee that no code outside the section that bypasses safe signals will be affected, because signals are no longer safe. In the example, the call being timed out is a DBI->connect. For most DBDs this will be implemented mostly in C; unless that C code can handle being aborted and tried again, you might find that some data structures internal to the DBD, or the libraries it uses, are left in an inconsistent state.
The chances of the example code going wrong are probably incredibly tiny. My personal anecdote on the issue is that I had used the traditional Perl signal handling for years before safe signals were introduced, and for a long time I never had a problem. I hadn't even been very cautious about what I did in my signal handlers. Then we hit a data set that actually did trigger memory corruption in about 1 out of every 100 runs. Just modifying the signal handlers to use better practices, similar to those in the example, eliminated our issues.
What does that even mean? By using unsafe signals, you can corrupt Perl's internals and Perl variables. It can also cause problems if a non-reentrant C library call is interrupted.
This can lead to SEGFAULTs and other problems, and those may only manifest themselves outside the block where the timeout is in effect.
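For readers who want to see the underlying hazard in its C form, here is a minimal sketch (not Perl-specific, just an illustration of the class of bug the answers describe): a SIGALRM handler calls malloc(3), which is not async-signal-safe, so if the signal lands while the main program is already inside the allocator, its internal bookkeeping can be corrupted, and the damage may only surface much later and far away from the timeout code.

#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static void handler(int sig)
{
    (void)sig;
    /* Calling malloc/free from a signal handler is undefined behaviour if the
     * interrupted code was itself inside malloc/free. */
    void *p = malloc(64);
    free(p);
}

int main(void)
{
    signal(SIGALRM, handler);
    alarm(1);

    /* Hammer the allocator so the signal is likely to arrive mid-allocation. */
    for (long i = 0; i < 100000000L; i++) {
        void *p = malloc(128);
        free(p);
    }
    return 0;
}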
Is inet_aton thread-safe? I know from UNP that POSIX doesn't require much of the sockets API to be thread-safe, so I have to assume it isn't, but in general how do I know whether something is thread-safe in Perl? To what extent do I need to lock library functions that I call? And how do I lock them? When I try something like lock(&inet_aton) it gives me an error: Can't modify non-lvalue subroutine call in lock.
And yes, I've read: Thread-Safety of System Libraries
If you read the inet_aton manpage carefully you will see that this call does not use any shared state (unlike the inet_ntoa function described in the same manpage), and thus should be thread-safe.
That the function writes its result into a caller-provided structure also supports this.
Perl uses a thin wrapper on top of those functions and thus doesn't change the thread safety of the underlying library.
The function inet_aton doesn't have any state it keeps between function calls, so I don't see any reason why it wouldn't be thread safe (provided the arguments you pass it aren't shared between threads).
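To make the "caller-provided structure" point concrete, here is a minimal C sketch (the addresses are just example values): each caller hands inet_aton() its own struct in_addr, and the function keeps no state between calls, which is why separate threads using separate structures cannot interfere with each other.

#include <arpa/inet.h>
#include <stdio.h>

int main(void)
{
    struct in_addr a, b;

    /* Each call writes its result into the structure supplied by the caller,
     * so there is no shared buffer to race on. */
    if (inet_aton("10.0.0.1", &a) && inet_aton("10.0.0.2", &b))
        printf("%u %u\n", (unsigned)ntohl(a.s_addr), (unsigned)ntohl(b.s_addr));

    return 0;
}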