multi-threaded avahi resolving causes segfault - bonjour

I'm attempting to port my zeroconf-enabled C/C++ app to Linux, but I'm getting D-Bus related segfaults. I'm not sure if this is a bug in Avahi, my misuse of Avahi, or a bug in my code.
I am using a ZeroconfResolver object that encapsulates an AvahiClient,
AvahiSimplePoll, and AvahiServiceResolver. The ZeroconfResolver has a
Resolve function that first instantiates the AvahiSimplePoll, then
AvahiClient, and finally the AvahiServiceResolver. At each
instantiation I am checking for errors before continuing to the next.
After the AvahiServiceResolver has been successfully created it calls
avahi_simple_poll_loop with the AvahiSimplePoll.
This whole process works great when done synchronously but fails with
segfaults when multiple ZeroconfResolvers are used at the same
time asynchronously (i.e. I have multiple threads creating their own
ZeroconfResolver objects). A trivial adaptation of the object that
reproduces the segfaults is shown in the code below (it may not produce a
segfault right away, but in my use case it happens frequently).
I understand that "out of the box" Avahi is not thread safe, but
according to my interpretation of [1] it is safe to have multiple
AvahiClient/AvahiPoll objects in the same process as long as they are
not 'accessed' from more than one thread. Each ZeroconfResolver has
its own set of Avahi objects that do not interact with each other
across thread boundaries.
The segfaults occur in seemingly random functions within the Avahi
library. In general they happen within the avahi_client_new or
avahi_service_resolver_new functions referencing dbus. Does the Avahi wiki
mean to imply that the 'creation' of AvahiClient/AvahiPoll objects is
also not thread safe?
[1] http://avahi.org/wiki/RunningAvahiClientAsThread
#include <dispatch/dispatch.h>
#include <cassert>
#include <cstdio>
#include <sys/types.h>
#include <netinet/in.h>
#include <avahi-client/lookup.h>
#include <avahi-client/client.h>
#include <avahi-client/publish.h>
#include <avahi-common/alternative.h>
#include <avahi-common/simple-watch.h>
#include <avahi-common/malloc.h>
#include <avahi-common/error.h>
#include <avahi-common/timeval.h>
void resolve_reply(
AvahiServiceResolver *r,
AVAHI_GCC_UNUSED AvahiIfIndex interface,
AVAHI_GCC_UNUSED AvahiProtocol protocol,
AvahiResolverEvent event,
const char *name,
const char *type,
const char *domain,
const char *host_name,
const AvahiAddress *address,
uint16_t port,
AvahiStringList *txt,
AvahiLookupResultFlags flags,
void * context) {
assert(r);
if (event == AVAHI_RESOLVER_FOUND)
printf("resolve_reply(%s, %s, %s, %s)[FOUND]\n", name, type, domain, host_name);
avahi_service_resolver_free(r);
avahi_simple_poll_quit((AvahiSimplePoll*)context);
}
int main() {
// Run until segfault
while (true) {
// Adding block to concurrent GCD queue (managed thread pool)
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), [=]{
char name[] = "SomeHTTPServerToResolve";
char domain[] = "local.";
char type[] = "_http._tcp.";
AvahiSimplePoll * simple_poll = NULL;
if ((simple_poll = avahi_simple_poll_new())) {
int error;
AvahiClient * client = NULL;
if ((client = avahi_client_new(avahi_simple_poll_get(simple_poll), AVAHI_CLIENT_NO_FAIL, NULL, NULL, &error))) {
AvahiServiceResolver * resolver = NULL;
if ((resolver = avahi_service_resolver_new(client, AVAHI_IF_UNSPEC, AVAHI_PROTO_UNSPEC, name, type, domain, AVAHI_PROTO_UNSPEC, AVAHI_LOOKUP_NO_ADDRESS, (AvahiServiceResolverCallback)resolve_reply, simple_poll))) {
avahi_simple_poll_loop(simple_poll);
printf("Exit Loop(%p)\n", simple_poll);
} else {
printf("Resolve(%s, %s, %s)[%s]\n", name, type, domain, avahi_strerror(avahi_client_errno(client)));
}
avahi_client_free(client);
} else {
printf("avahi_client_new()[%s]\n", avahi_strerror(error));
}
avahi_simple_poll_free(simple_poll);
} else {
printf("avahi_simple_poll_new()[Failed]\n");
}
});
}
// Never reached
return 0;
}

One solution that seems to work fine is to add your own synchronization (a common mutex) around avahi_client_new, avahi_service_resolver_new and the corresponding free operations. Avahi does not claim that those operations are internally synchronized;
what it does claim is that independent objects do not interfere with each other.
I had success with this approach, using a helper class with a static mutex. To be specific, a static member function (or free function) like this:
std::mutex& avahi_mutex(){
static std::mutex mtx;
return mtx;
}
and a lock around any section of code (kept as small as possible) that does a new or free:
{
std::unique_lock<std::mutex> alock(avahi_mutex());
simple_poll = avahi_simple_poll_new();
}
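
For reference, here is a minimal sketch (under the same assumptions as the question's code, with resolve_once as an illustrative name, not an existing API) of how the resolve path could be wrapped so that only the Avahi create/free calls are serialized, while each poll loop still runs independently on its own thread:
// Sketch only: every Avahi new/free goes through the shared mutex; the
// poll loop itself does not, since each thread owns its own objects.
void resolve_once(const char *name, const char *type, const char *domain) {
    AvahiSimplePoll *simple_poll = NULL;
    AvahiClient *client = NULL;
    AvahiServiceResolver *resolver = NULL;
    int error = 0;
    {
        std::unique_lock<std::mutex> alock(avahi_mutex());
        simple_poll = avahi_simple_poll_new();
        if (simple_poll)
            client = avahi_client_new(avahi_simple_poll_get(simple_poll),
                                      AVAHI_CLIENT_NO_FAIL, NULL, NULL, &error);
        if (client)
            resolver = avahi_service_resolver_new(client, AVAHI_IF_UNSPEC,
                           AVAHI_PROTO_UNSPEC, name, type, domain, AVAHI_PROTO_UNSPEC,
                           AVAHI_LOOKUP_NO_ADDRESS,
                           (AvahiServiceResolverCallback)resolve_reply, simple_poll);
    }
    if (resolver)
        avahi_simple_poll_loop(simple_poll); // callback frees the resolver and quits the loop
    {
        std::unique_lock<std::mutex> alock(avahi_mutex());
        if (client)
            avahi_client_free(client);
        if (simple_poll)
            avahi_simple_poll_free(simple_poll);
    }
}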

Related

(GSM module SM5100B + ATMEGA16A interface) Trouble sending SMS using AT commands in C code

I am having trouble with my university project for embedded systems. The goal is to establish an interface between an SM5100B GSM module and an ATMEGA16A microcontroller using UART (which I did, using the correct ports from the datasheets), and to be able to send/receive simple SMS messages by sending AT commands through the Tx and Rx ports from the ATmega to the GSM module and vice versa, in C code (not using a terminal program).
When I tested the GSM module using TeraTerm, I was able to connect properly and send AT commands easily, and I also managed to send and receive an SMS with the SIM card inserted, so everything works fine.
Now I'm trying to do the same thing from the microcontroller.
Here is the code I have so far:
#define F_CPU 7372800UL
#include <stdio.h>
#include <stdlib.h>
#include <util/delay.h>
#include <avr/io.h>
#include <string.h>
#define BAUD 9600
#define MYUBRR ((F_CPU/16/BAUD)-1) //BAUD PRESCALAR (for Asynch. mode)
void GSM_init(unsigned int ubrr ) {
/* Set baud rate */
UBRRH = (unsigned char)(ubrr>>8);
UBRRL = (unsigned char)ubrr;
/* Enable receiver and transmitter */
UCSRB = (1<<RXEN)|(1<<TXEN);
/* Set frame format: 8data, 2stop bit */
UCSRC = (1<<URSEL)|(1<<USBS)|(3<<UCSZ0);
}
void USART_Transmit(char data ) {
/* Wait for empty transmit buffer */
while ( !( UCSRA & (1<<UDRE)) );
/* Put data into buffer, sends the data */
UDR = data;
}
void USART_Transmits(char data[] ) {
int i;
for(i=0; i<strlen(data); i++) {
USART_Transmit(data[i]);
_delay_ms(300);
}
}
int main(void)
{
GSM_init(MYUBRR);
char text_mode[] = "AT+CMGF=1";
char send_sms[] = "AT+CMGS=";
char phone_number[] = "00385*********";
char sms[] = "gsm sadness";
USART_Transmits(text_mode);
_delay_ms(1000);
USART_Transmits(send_sms);
_delay_ms(1000);
USART_Transmit(34);//quotation mark "
//_delay_ms(300);
USART_Transmits(phone_number);
//_delay_ms(300);
USART_Transmit(34);//quotation mark "
//_delay_ms(300);
USART_Transmit(13);//enter
//_delay_ms(300);
USART_Transmits(sms);
_delay_ms(1000);
USART_Transmit(26);//ctrl+z
_delay_ms(300);
USART_Transmit(13);//enter
_delay_ms(3000);
while (1)
{
}
}
However, my code isn't working: it's not sending the message.
The functions for transmitting are taken from the datasheet, and everywhere I search on the internet I find the same ones over and over again.
Is the problem that I'm not reading the AT responses correctly, or in how I'm passing the AT commands to the serial port?
Can anybody help me understand where I'm going wrong with this, or where I can look to understand how to make this work?
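For what it's worth, a minimal polled receive helper for the same ATmega16 UART (USART_Receive is an illustrative name, not part of the code above) would let the program check the module's "OK"/"ERROR" replies instead of relying only on fixed delays:
char USART_Receive(void) {
    /* Wait for a byte to arrive */
    while ( !(UCSRA & (1<<RXC)) );
    /* Return the received byte from the buffer */
    return UDR;
}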

Perl XS garbage collection

I had to deal with a really old codebase in my company which had C++ APIs exposed via Perl.
In one of the code reviews, I suggested it was necessary to clean up memory that was being allocated in C++.
Here is the skeleton of the code:
char* convert_to_utf8(char *src, int length) {
.
.
.
length = get_utf8_length(src);
char *dest = new char[length];
.
.
// No delete
return dest;
}
Perl xs definition:
PROTOTYPE: ENABLE
char * _xs_convert_to_utf8(src, length)
char *src
int length
CODE:
RETVAL = convert_to_utf8(src, length)
OUTPUT:
RETVAL
So I commented that the memory created in the C++ function will not be garbage collected by Perl. Two Java developers think it will crash, because Perl will garbage collect the memory allocated by C++. I suggested the following code.
CLEANUP:
delete[] RETVAL
Am I wrong here?
I also ran this code and showed them the increasing memory utilization with and without the CLEANUP section, but they are asking for exact documentation that proves it, and I couldn't find any.
Perl Client:
use ExtUtils::testlib;
use test;
for (my $i=0; $i<100000000;$i++) {
my $a = test::hello();
}
C++ code:
#define PERL_NO_GET_CONTEXT
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"
#include "ppport.h"
#include <stdio.h>
char* create_mem() {
char *foo = (char*)malloc(sizeof(char)*150);
return foo;
}
XS code:
MODULE = test PACKAGE = test
char * hello()
CODE:
RETVAL = create_mem();
OUTPUT:
RETVAL
CLEANUP:
free(RETVAL);
I'm afraid the people who wrote (and write) the Perl XS documentation probably consider it too obvious to document explicitly that Perl cannot magically detect memory allocations made in other languages (like C++). There's a bit in the perlguts documentation page saying that all memory to be used via the Perl XS API must be allocated using Perl's macros, which may help you argue your case.
When you write XS code, you're writing C (or sometimes C++) code. You still need to write proper C/C++, which includes deallocating allocated memory when appropriate.
The glue function you desire XS to create is the following:
void hello() {
dSP; // Declare and init SP, the stack pointer used by mXPUSHs.
char* mem = create_mem();
mXPUSHs(newSVpv(mem, 0)); // Create a scalar, mortalize it, and push it on the stack.
free(mem); // Free memory allocated by create_mem().
XSRETURN(1);
}
newSVpv makes a copy of mem rather than taking possession of it, so the above clearly shows that free(mem) is needed to deallocate mem.
In XS, you could write that as
void hello()
CODE:
{ // A block is needed since we're declaring vars.
char* mem = create_mem();
mXPUSHs(newSVpv(mem, 0));
free(mem);
XSRETURN(1);
}
Or you could take advantage of XS features such as RETVAL and CLEANUP.
SV* hello()
char* mem; // We can get rid of the block by declaring vars here.
CODE:
mem = create_mem();
RETVAL = newSVpv(mem, 0); // Values returned by SV* subs are automatically mortalized.
OUTPUT:
RETVAL
CLEANUP: // Happens after RETVAL has been converted
free(mem); // and the converted value has been pushed onto the stack.
Or you could also take advantage of the typemap, which defines how to convert the returned value into a scalar.
char* hello()
CODE:
RETVAL = create_mem();
OUTPUT:
RETVAL
CLEANUP:
free(RETVAL);
All three of these are perfectly acceptable.
A note on mortals.
Mortalizing is a delayed reference count decrement. If you decremented the reference count of the SV created by hello before hello returned, the SV would be deallocated before the caller ever saw it. By mortalizing it instead, it won't be deallocated until the caller has had a chance to inspect it or take possession of it (by increasing its reference count).
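To make that concrete, mXPUSHs(sv) is just shorthand for pushing a mortalized SV; the first XS example above could equivalently be written long-hand like this (same assumptions, same create_mem()):
void hello()
CODE:
{ // A block is needed since we're declaring vars.
    char* mem = create_mem();
    SV* sv = newSVpv(mem, 0); // copies mem into a fresh SV with refcount 1
    free(mem);                // the C buffer is no longer needed once copied
    sv_2mortal(sv);           // schedule the refcount drop: the "mortalize" step
    XPUSHs(sv);               // mXPUSHs(newSVpv(mem, 0)) combines these two calls
    XSRETURN(1);
}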

a lack of examples of using libmodbus functions

I am new to modbus. I have spent hours reading the Help(?) files, which never seem to give you an example! I am using C on a Raspberry Pi, model3 and have installed libmodbus. I am trying to talk to an epSolar solar panel controller via an FTDI USB to RS485 converter.
The epSolar docs say that the Read Input registers start at address 3000 and continue to 311D. I am trying to read 3104.
I modified the code below. It connects to the device but trying to read input register 0x04 always returns -1:
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <stdlib.h>
#include <errno.h>
#include <modbus.h>
enum {TCP, RTU};
int main(int argc, char *argv[])
{
int socket;
modbus_t *ctx;
modbus_mapping_t *mb_mapping;
int rc;
int use_backend;
int i;
uint16_t tab_reg[64];
use_backend = RTU;
printf("Waiting for Serial connection\n");
ctx = modbus_new_rtu("/dev/SOLAR", 115200, 'N', 8, 1);
modbus_set_slave(ctx, 0);
//modbus_connect(ctx);
if(modbus_connect(ctx) == -1)
{
fprintf(stderr, "Serial connection failed:
%s\n", modbus_strerror(errno));
modbus_free(ctx);
return -1;
}
printf("Serial connection started!\n");
mb_mapping = modbus_mapping_new(MODBUS_MAX_READ_BITS, 0,
MODBUS_MAX_READ_REGISTERS, 0);
if(mb_mapping == NULL)
{
fprintf(stderr, "Failed to allocate the mapping: %s\n",
modbus_strerror(errno));
modbus_free(ctx);
return -1;
}
rc = modbus_read_input_registers(ctx, 1, 0x0A, tab_reg);
if(rc == -1)
{
fprintf(stderr, "%s\n", modbus_strerror(errno));
return -1;
}
for(i=0; i < rc; i++)
printf("reg[%d]=%d (0x%X)\n", i, tab_reg[i], tab_reg[i]);
modbus_mapping_free(mb_mapping);
modbus_free(ctx);
modbus_close(ctx);
return 0;
}
It connects fine and allocates the mapping, but rc is always -1 with error message that the port has timed out.
I have run out of ideas and feel like I am navigating through treacle!
Any help most appreciated.
I am also new to Modbus, but based on my experience so far: make sure you are allocating enough memory in tab_reg to store the results. Also try turning debug mode on, i.e. modbus_set_debug(ctx, TRUE), to inspect the request and response frames.
I know this is a really old question, but hopefully this answer will help anyone who lands here via a Google search.
I can see a few points that need some help.
As commented by Saad above, the Modbus slave ID is incorrect. ID 0 is reserved for broadcast messages, which a slave will not respond to. Find out what the Modbus ID of the target device is, and use that.
I think what's tricking you is that you'll also always get a proper "connect" as long as the serial port you provided is valid. This isn't a connection to any particular device so much as it's a connection to the Modbus network port. You're getting a timeout because a response was expected by libmodbus, but no response was received on the wire.
There are several other little troubles in the code presented, but given the age of this post I almost feel like I'm nitpicking something the OP probably already solved. The big problem is the unworkable slave ID. Other minor problems include: unnecessary use of modbus_mapping (struct for use on server/slaves), possible misallocation of modbus_mapping (no space allocated for input registers).
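To make that concrete, a pared-down client along those lines might look like the sketch below; the slave ID of 1 and the 0x3100 start address are placeholders to be replaced with the values from the epSolar documentation:
#include <errno.h>
#include <stdio.h>
#include <stdint.h>
#include <modbus.h>

int main(void)
{
    uint16_t regs[16];
    /* Same serial settings as the original code. */
    modbus_t *ctx = modbus_new_rtu("/dev/SOLAR", 115200, 'N', 8, 1);
    if (ctx == NULL)
        return -1;
    modbus_set_slave(ctx, 1);      /* placeholder: use your device's real ID, not 0 */
    modbus_set_debug(ctx, TRUE);   /* print request/response frames */
    if (modbus_connect(ctx) == -1) {
        fprintf(stderr, "connect: %s\n", modbus_strerror(errno));
        modbus_free(ctx);
        return -1;
    }
    /* No modbus_mapping_new(): that is only needed on the server/slave side. */
    int rc = modbus_read_input_registers(ctx, 0x3100, 8, regs); /* assumed start address */
    if (rc == -1)
        fprintf(stderr, "read: %s\n", modbus_strerror(errno));
    else
        for (int i = 0; i < rc; i++)
            printf("reg[0x%X]=0x%X\n", 0x3100 + i, regs[i]);
    modbus_close(ctx);
    modbus_free(ctx);
    return 0;
}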

Is select() + non-blocking write() possible on a blocking pipe or socket?

The situation is that I have a blocking pipe or socket fd to which I want to write() without blocking, so I do a select() first, but that still doesn't guarantee that write() will not block.
Here is the data I have gathered. Even if select() indicates that
writing is possible, writing more than PIPE_BUF bytes can block.
However, writing at most PIPE_BUF bytes doesn't seem to block in
practice, but this is not mandated by the POSIX spec, which only
specifies atomic behavior. The Python(!) documentation states that:
Files reported as ready for writing by select(), poll() or similar
interfaces in this module are guaranteed to not block on a write of up
to PIPE_BUF bytes. This value is guaranteed by POSIX to be at least
512.
In the following test program, set BUF_BYTES to say 100000 to block in
write() on Linux, FreeBSD or Solaris following a successful select. I
assume that named pipes have similar behavior to anonymous pipes.
Unfortunately the same can happen with blocking sockets. Call
test_socket() in main() and use a largish BUF_BYTES (100000 is good
here too). It's unclear whether there is a safe buffer size like
PIPE_BUF for sockets.
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <limits.h>
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>
#define BUF_BYTES PIPE_BUF
char buf[BUF_BYTES];
int
probe_with_select(int nfds, fd_set *readfds, fd_set *writefds,
fd_set *exceptfds)
{
struct timeval timeout = {0, 0};
int n_found = select(nfds, readfds, writefds, exceptfds, &timeout);
if (n_found == -1) {
perror("select");
}
return n_found;
}
void
check_if_readable(int fd)
{
fd_set fdset;
FD_ZERO(&fdset);
FD_SET(fd, &fdset);
printf("select() for read on fd %d returned %d\n",
fd, probe_with_select(fd + 1, &fdset, 0, 0));
}
void
check_if_writable(int fd)
{
fd_set fdset;
FD_ZERO(&fdset);
FD_SET(fd, &fdset);
int n_found = probe_with_select(fd + 1, 0, &fdset, 0);
printf("select() for write on fd %d returned %d\n", fd, n_found);
/* if (n_found == 0) { */
/* printf("sleeping\n"); */
/* sleep(2); */
/* int n_found = probe_with_select(fd + 1, 0, &fdset, 0); */
/* printf("retried select() for write on fd %d returned %d\n", */
/* fd, n_found); */
/* } */
}
void
test_pipe(void)
{
int pipe_fds[2];
ssize_t written;
int i;
if (pipe(pipe_fds)) {
perror("pipe failed");
_exit(1);
}
printf("read side pipe fd: %d\n", pipe_fds[0]);
printf("write side pipe fd: %d\n", pipe_fds[1]);
for (i = 0; ; i++) {
printf("i = %d\n", i);
check_if_readable(pipe_fds[0]);
check_if_writable(pipe_fds[1]);
written = write(pipe_fds[1], buf, BUF_BYTES);
if (written == -1) {
perror("write");
_exit(-1);
}
printf("written %d bytes\n", written);
}
}
void
serve()
{
int listenfd = 0, connfd = 0;
struct sockaddr_in serv_addr;
listenfd = socket(AF_INET, SOCK_STREAM, 0);
memset(&serv_addr, '0', sizeof(serv_addr));
serv_addr.sin_family = AF_INET;
serv_addr.sin_addr.s_addr = htonl(INADDR_ANY);
serv_addr.sin_port = htons(5000);
bind(listenfd, (struct sockaddr*)&serv_addr, sizeof(serv_addr));
listen(listenfd, 10);
connfd = accept(listenfd, (struct sockaddr*)NULL, NULL);
sleep(10);
}
int
connect_to_server()
{
int sockfd = 0, n = 0;
struct sockaddr_in serv_addr;
if((sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
perror("socket");
exit(-1);
}
memset(&serv_addr, '0', sizeof(serv_addr));
serv_addr.sin_family = AF_INET;
serv_addr.sin_port = htons(5000);
if(inet_pton(AF_INET, "127.0.0.1", &serv_addr.sin_addr) <= 0) {
perror("inet_pton");
exit(-1);
}
if (connect(sockfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0) {
perror("connect");
exit(-1);
}
return sockfd;
}
void
test_socket(void)
{
if (fork() == 0) {
serve();
} else {
int fd;
int i;
int written;
sleep(1);
fd = connect_to_server();
for (i = 0; ; i++) {
printf("i = %d\n", i);
check_if_readable(fd);
check_if_writable(fd);
written = write(fd, buf, BUF_BYTES);
if (written == -1) {
perror("write");
_exit(-1);
}
printf("written %d bytes\n", written);
}
}
}
int
main(void)
{
test_pipe();
/* test_socket(); */
}
Unless you wish to send one byte at a time whenever select() says the fd is ready for writes, there is really no way to know how much you will be able to send and even then it is theoretically possible (at least in the documentation, if not in the real world) for select to say it's ready for writes and then the condition to change in the time between select() and write().
Non blocking sends are the solution here and you don't need to change your file descriptor to non blocking mode to send one message in non-blocking form if you change from using write() to send(). The only thing you need to change is to add the MSG_DONTWAIT flag to the send call and that will make the one send non-blocking without altering your socket's properties. You don't even need to use select() at all in this case either since the send() call will give you all the information you need in the return code - if you get a return code of -1 and the errno is EAGAIN or EWOULDBLOCK then you know you can't send any more.
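A minimal sketch of that approach, assuming an already-connected stream socket (send_nonblocking is an illustrative helper name, not an existing API):
#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>

/* Try to send without blocking and without putting the fd into
 * non-blocking mode. Returns the number of bytes queued (possibly 0),
 * or -1 on a real error. */
ssize_t send_nonblocking(int fd, const void *buf, size_t len)
{
    ssize_t n = send(fd, buf, len, MSG_DONTWAIT);
    if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 0;   /* socket buffer full: retry later, e.g. after select()/poll() */
    return n;       /* may be a short send; the caller resends the remainder */
}
A short return just means the kernel queued what it could; the rest has to be retried later.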
The Posix section you cite clearly states:
[for pipes] If the O_NONBLOCK flag is clear, a write request may cause the thread to block, but on normal completion it shall return nbyte.
[for streams, which presumably includes streaming sockets] If O_NONBLOCK is clear, and the STREAM cannot accept data (the STREAM write queue is full due to internal flow control conditions), write() shall block until data can be accepted.
The Python documentation you quoted can therefore only apply to non-blocking mode. But as you're not using Python, it has no relevance anyway.
The answer by ckolivas is the correct one but, having read this post, I thought I could add some test data for interest's sake.
I quickly wrote a slow reading tcp server (sleeping 100ms between reads) which did a read of 4KB on each cycle. Then a fast writing client which I used for testing various scenarios on write. Both were using select before read (server) or write (client).
This was on Linux Mint 18 running under a Windows 7 VM (VirtualBox) with 1GB of memory assigned.
For the blocking case
If a write of a "certain number of bytes" became possible, select returned and the write either completed in total immediately or blocked until it completed. On my system, this "certain number of bytes" was at least 1MB. On the OP's system, this was clearly much less (less than 100,000).
So select did not return until a write of at least 1MB was possible. There was never a case (that I saw) where select would return if a smaller write would subsequently block. Thus select + write(x) where x was 4K or 8K or 128K never write blocked on this system.
This is all very well of course but this was an unloaded VM with 1GB of memory. Other systems would be expected to be different. However, I would expect that writes below a certain magic number (PIPE_BUF perhaps), issued subsequent to a select, would never block on all POSIX compliant systems. However (again) I don't see any documentation to that effect so one can't rely on that behaviour (even though the Python documentation clearly does). As the OP says, it's unclear whether there is a safe buffer size like PIPE_BUF for sockets. Which is a pity.
Which is what ckolivas' post says even though I'd argue that no rational system would return from a select when only a single byte was available!
Extra information:
At no point (in normal operation) did write return anything other than the full amount requested (or an error).
If the server was killed (ctrl-c), the client side write would immediately return a value (usually less than was requested - no normal operation!) with no other indication of error. The next select call would return immediately and the subsequent write would return -1 with errno saying "Connection reset by peer". Which is what one would expect - write as much as you can this time, fail the next time.
This (and EINTR) appears to be the only time write returns a number > 0 but less than requested.
If the server side was reading and the client was killed, the server continued to read all available data until it ran out. Then it read a zero and closed the socket.
For the non-blocking case:
The behaviour below some magic value is the same as above. select returns, write doesn't block (of course) and the write completes in its totality.
My issue was what happens otherwise. The send(2) man page says that in non-blocking mode, send fails with EAGAIN or EWOULDBLOCK. Which might imply (depending on how you read it) that it's all or nothing. Except that it also says select may be used to determine when it is possible to send more data. So it can't be all or nothing.
Write (which is the same as send with no flags), says it can return less than requested. This nitpicking seems pedantic but the man pages are the gospel so I read them as such.
In testing, a non-blocking write with a value larger than some particular value returned less than requested. This value wasn't constant, it changed from write to write but it was always pretty large (> 1 to 2MB).
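For completeness, the usual way to cope with those short writes on a non-blocking descriptor is a resume loop along these lines (a sketch only; write_all_nonblocking is an illustrative name):
#include <sys/types.h>
#include <unistd.h>
#include <errno.h>

/* Keep writing until the whole buffer is queued or the descriptor would
 * block. Returns the number of bytes written, or -1 on a real error. */
ssize_t write_all_nonblocking(int fd, const char *buf, size_t len)
{
    size_t off = 0;
    while (off < len) {
        ssize_t n = write(fd, buf + off, len - off);
        if (n >= 0) {
            off += (size_t)n;
            if (n == 0)
                break;      /* nothing accepted: avoid spinning */
        } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
            break;          /* short write: wait for select()/poll(), then resume */
        } else if (errno == EINTR) {
            continue;       /* interrupted by a signal: just retry */
        } else {
            return -1;      /* real error */
        }
    }
    return (ssize_t)off;
}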

How do unix-like OS implement IPC shared memory?

I am wondering how Unix-like operating systems implement shared memory. What is the difference between accessing normal user-space memory and accessing memory in a System IPC shared memory segment?
Process memory is protected: outside of your program, normally no one can access it. This involves some important machinery: your program is made to believe it has the whole addressable space for itself, which is not the case. As I understand it, the address space of a process is split into pages (4 KB blocks, typically), and the kernel keeps a map of those pages onto physical memory or other backing stores (like your hard drive; that's how you get memory-mapped files). Whenever your process tries to access a memory address, that map is consulted first to see where the address actually points, and then the access is performed as requested. And whenever the process tries to access a page the kernel hasn't mapped anywhere, you get a segmentation fault.
Since memory is abstracted away like this, the kernel can do all kinds of tricks with it. Shared memory is essentially a special case, where the kernel is asked to map pages from different processes' address spaces to the same physical location.
Memory used by a process is indeed protected. When two or more processes need to share data, it is mapped into a special memory segment that is accessible from both processes. That is the main idea of inter-process communication using shared memory. Below is a small shared memory example. (The code is derived from John Fusco's book, The Linux Programmer's Toolbox, ISBN 0132198576, published by Prentice Hall Professional, March 2007, and used with the permission of the publisher.) The code implements a parent and child process that communicate via a shared memory segment.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/file.h>
#include <sys/mman.h>
#include <sys/wait.h>
void error_and_die(const char *msg) {
perror(msg);
exit(EXIT_FAILURE);
}
int main(int argc, char *argv[]) {
int r;
const char *memname = "sample";
const size_t region_size = sysconf(_SC_PAGE_SIZE);
int fd = shm_open(memname, O_CREAT | O_TRUNC | O_RDWR, 0666);
if (fd == -1)
error_and_die("shm_open");
r = ftruncate(fd, region_size);
if (r != 0)
error_and_die("ftruncate");
void *ptr = mmap(0, region_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
if (ptr == MAP_FAILED)
error_and_die("mmap");
close(fd);
pid_t pid = fork();
if (pid == 0) {
u_long *d = (u_long *) ptr;
*d = 0xdbeebee;
exit(0);
}
else {
int status;
waitpid(pid, &status, 0);
printf("child wrote %#lx\n", *(u_long *) ptr);
}
r = munmap(ptr, region_size);
if (r != 0)
error_and_die("munmap");
r = shm_unlink(memname);
if (r != 0)
error_and_die("shm_unlink");
return 0;
}
The difference between normal user-space memory and IPC shared memory is that normal process memory is private to the process that owns it, whereas an IPC shared memory segment is deliberately mapped into the address space of every process that attaches to it, so they all see the same physical pages.
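The example above uses the POSIX shm_open() interface; the System V flavour of IPC shared memory looks much the same once the segment is attached. Here is a minimal sketch of the same parent/child exchange with shmget()/shmat() (the key value is arbitrary):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>

int main(void)
{
    /* Create (or open) a 4 KiB System V segment identified by an arbitrary key. */
    int shmid = shmget((key_t)0x1234, 4096, IPC_CREAT | 0666);
    if (shmid == -1) { perror("shmget"); exit(EXIT_FAILURE); }

    /* Attach the segment to this process; the child inherits the mapping across fork(). */
    unsigned long *d = shmat(shmid, NULL, 0);
    if (d == (void *)-1) { perror("shmat"); exit(EXIT_FAILURE); }

    if (fork() == 0) {
        *d = 0xdbeebee;            /* child writes through the shared mapping */
        shmdt(d);
        _exit(0);
    }
    wait(NULL);
    printf("child wrote %#lx\n", *d);

    shmdt(d);                      /* detach, then remove the segment */
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}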