I am using the Linux sendmsg() function to send a file descriptor to another process over a Unix socket, along with some data payload. I make multiple calls to sendmsg(). In the recvmsg() companion call inside the destination process, I get the file descriptor using something like "fdptr = (int *) CMSG_DATA(cmsg); memcpy(myfds, fdptr, NUM_FD * sizeof(int));". What I am noticing is that each time I look at the file descriptor, it is yet a DIFFERENT number than it was in the prior recvmsg() call.
My question: is the destination process holding open a bunch of descriptors to the same file/hardware? Do I need to close the descriptors?
What would happen if I were not to copy the descriptor out with "memcpy(myfds, fdptr, NUM_FD * sizeof(int));" and essentially 'left it inside' CMSG_DATA(cmsg)? Would there be some descriptor with an unknown number sitting out there? Had I not copied it out, I would never have seen that it was essentially yet another descriptor number.
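For reference, the receive side of this API usually looks something like the following minimal sketch (Linux/POSIX; NUM_FD and recv_fds are illustrative names, and error handling is trimmed):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

#define NUM_FD 1

/* receive NUM_FD descriptors into myfds; returns 0 on success */
static int recv_fds(int sock, int *myfds)
{
    char iobuf[1];                          /* at least one byte of real payload */
    struct iovec iov = { .iov_base = iobuf, .iov_len = sizeof iobuf };
    union {                                 /* correctly aligned ancillary buffer */
        char buf[CMSG_SPACE(NUM_FD * sizeof(int))];
        struct cmsghdr align;
    } u;
    struct msghdr msg = {0};

    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof u.buf;

    if (recvmsg(sock, &msg, 0) < 0)
        return -1;

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    if (cmsg != NULL && cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_RIGHTS)
        /* copy OUT of the cmsg; each fd received this way is a real
           descriptor in this process and must eventually be close()d */
        memcpy(myfds, CMSG_DATA(cmsg), NUM_FD * sizeof(int));
    return 0;
}

The kernel installs each passed descriptor into an unused slot of the receiver's descriptor table, which is why successive recvmsg() calls report different numbers.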
In MINIX 3.2.1, I want to create a new system call in the VFS server which will be given a filename as a parameter and will print that file's inode number.
So, in order to retrieve the inode of the file by its name, I want to use the default system call:
int stat(char *name,struct stat *buffer)
http://minix1.woodhull.com/manpages/man2/stat.2.html
in the body of my new system call handler, which is
int mycall_1(void);
inside /usr/src/servers/vfs/misc.c
But when I test the new system call, at the point where the stat system call should be invoked, it actually isn't; instead, this message is printed:
sys_call: ipc mask denied SENDREC from 1 to 1
After some research, I found that this probably happens because the VFS server tries to send a message to itself: stat is actually implemented inside the VFS server, so the IPC mask denies the sendrec() call. It seems I have to edit some configuration file to grant the right permission for this communication to happen.
But I'm not sure whether what I have understood is right, and I also do not know which file I should edit to grant the appropriate permissions. So, if someone could enlighten me on this issue, I would be grateful.
Thanks in advance.
You understood it correctly. But the solution is not to keep "fixing" the permissions, which are there precisely to keep you from shooting yourself in the foot: loosening them would only let the system break in worse ways.
Rather, you need to perform the same steps VFS itself performs when it services a STAT request, starting right after it has unpacked the message.
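As a rough sketch of that idea, modelled on do_stat() in servers/vfs/stadir.c: resolve the path to a vnode inside VFS and read its inode number directly, with no sendrec() involved. The helper names (fetch_name, lookup_init, eat_path, put_vnode) and the v_inode_nr field are taken from my reading of the 3.2.x sources, so verify them against your tree:

/* Sketch only, inside /usr/src/servers/vfs/misc.c; assumes 3.2.x helpers. */
int mycall_1(void)
{
  struct vnode *vp;
  struct vmnt *vmp;
  struct lookup resolve;
  char fullpath[PATH_MAX];

  /* copy the filename argument in from the caller, as do_stat() does */
  if (fetch_name(m_in.name1, m_in.name1_length, M1, fullpath) != OK)
        return(err_code);

  lookup_init(&resolve, fullpath, PATH_NOFLAGS, &vmp, &vp);
  resolve.l_vmnt_lock = VMNT_READ;
  resolve.l_vnode_lock = VNODE_READ;

  /* walk the path to its vnode -- no message to ourselves needed */
  if ((vp = eat_path(&resolve, fp)) == NULL) return(err_code);

  printf("VFS: inode number of %s is %lu\n", fullpath,
        (unsigned long) vp->v_inode_nr);

  unlock_vnode(vp);
  unlock_vmnt(vmp);
  put_vnode(vp);        /* drop the reference eat_path() took */
  return(OK);
}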
Yo, I've written a server with a simple protocol: the client sends a line, the server sends a line back in response, repeat. To prevent a client from filling Tcl's output buffer by sending lots of lines but not accepting data back, can I just check chan pending output instead of using the writable fileevent?
proc respond {stream msg} {
    if {[chan pending output $stream] <= 1024} {
        puts $stream $msg
    } else {
        #close $stream
    }
}
For output, chan pending output will correctly describe the number of bytes waiting in the output queue. Normally, that value will be bounded by the -buffersize value that you chan configure (or fconfigure) it to have.
That value will only be exceeded when the channel is non-blocking; with a blocking channel, when the value would go over it, instead there's a blocking write to the underlying device (socket, pipe, file, serial line, whatever) so by the time you could see that it went over, it's back under the limit again.
But if you're using non-blocking channels, you really should use chan event (or fileevent). Luckily, for the actual writes Tcl will do this for you automatically: the single most useful thing you could want from a writable event is already there. In practice, the most common remaining use of a writable event is detecting when an async socket connection becomes ready for service.
So what you are doing will work, but you'll have to think carefully about what to do if the output buffer is “getting full”; the idea that a message can need to be delayed is a place where a simple abstraction tends to become leaky. With 8.6's coroutines, you could (probably) do a transparent suspend or something like that, but getting that sort of thing right can take a little thought. (For example, a GUI client might need to show a busy indicator and put things into a state where the user can't enter more requests.)
I wrote the file-transfer code as follows:
val fileContent: Enumerator[Array[Byte]] = Enumerator.fromFile(file)
val size = file.length.toString
file.delete // (1) THE FILE IS TEMPORARY SO SHOULD BE DELETED
SimpleResult(
header = ResponseHeader(200, Map(CONTENT_LENGTH -> size, CONTENT_TYPE -> "application/pdf")),
body = fileContent)
This code works successfully, even if the file size is rather large (2.6 MB),
but I'm confused, because my understanding is that .fromFile() is a wrapper around fromCallback(), and that SimpleResult reads the file in buffered chunks, yet the file is deleted before that happens.
My naive assumption is that java.io.File.delete waits until the file is released once the chunked reading has completed, but I have never heard of the Java File class behaving that way.
Or perhaps .fromFile() has already loaded the entire content into the Enumerator instance, but that would go against the fromCallback() contract, I think.
Does anybody know how this mechanism works?
I'm guessing you are on some kind of Unix system, OS X or Linux for example.
On a Unixy system you can in fact delete a file that is open: a filesystem entry is just a link to the actual file, and so is the file handle you get when you open a file. The file contents won't become unreachable/deleted until the last link to them is removed.
So: the file will no longer show up in the filesystem after you call file.delete, but you can still read it through the InputStream that was created in Enumerator.fromFile(file), since that created a file handle. (On Linux you can actually find it through the special /proc filesystem, which, among other things, lists the file handles of each running process.)
On Windows I think you will get an error though, so if the app is to run on multiple platforms you should probably test it on Windows as well.
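A minimal C sketch of that POSIX behaviour (filename and sizes are arbitrary): the directory entry disappears at unlink(), but the data stays readable through the still-open descriptor.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[16];

    /* create a file, open it, then unlink it while the fd is still open */
    int fd = open("tmpfile", O_CREAT | O_RDWR | O_TRUNC, 0600);
    if (fd < 0) { perror("open"); return 1; }

    write(fd, "still readable", 14);
    unlink("tmpfile");                           /* directory entry gone ... */

    lseek(fd, 0, SEEK_SET);
    ssize_t n = read(fd, buf, sizeof buf - 1);   /* ... but the data is not */
    if (n > 0) { buf[n] = '\0'; printf("read back: %s\n", buf); }

    close(fd);                   /* last reference dropped: now the inode is freed */
    return 0;
}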
I want to implement the already defined system calls in PintOS (halt(), create(), etc., defined in pintos/src/lib/user/syscall.c). The current system call handler in pintos/src/userprog/syscall.c does not do anything. How do I write a process that makes system calls? Further, I need to add a few system calls of my own. How do I proceed with that as well? But first I need to implement the existing system calls.
The default implementation in pintos terminates the calling process.
Go to this link. There is an explanation of where to modify the code to implement the system calls.
The "src/examples" directory contains a few sample user programs.
The "Makefile" in this directory compiles the provided examples, and you can edit it compile your own programs as well.
Such a program/process, when run, will in turn make system calls.
Use gdb to follow the execution of one such program; a simple printf statement will eventually invoke the write system call on STDOUT.
The link given also has pointers on how to run pintos under gdb; my guess is you are using either bochs or qemu. In any case, just run gdb once with a simple hello-world program running on pintos.
This will give you an idea of how the system call is made.
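For instance, a minimal user program like this one (suitable for src/examples) is enough to drive printf down into the write system call:

/* hello.c -- a minimal PintOS user program; printf() on STDOUT
   eventually reaches the SYS_WRITE handler in the kernel. */
#include <stdio.h>

int
main (void)
{
  printf ("hello, pintos\n");
  return 0;
}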
static void
syscall_handler (struct intr_frame *f)
{
  int *p = f->esp;                 /* the syscall number sits on top of the user stack */
  switch (*p)
    {
      case SYS_CREATE:             /* number #defined in lib/syscall-nr.h */
        {
          const char *name = (const char *) *(p + 1);   /* extract the filename */
          if (name == NULL || *name == '\0')
            exit (-1);
          off_t size = (off_t) *(p + 2);                /* extract the initial file size */
          f->eax = filesys_create (name, size);         /* eax carries the return value */
          break;
        }
    }
}
This is pseudo-code for SYS_CREATE; all the purely file-system-related system calls are fairly trivial.
Filesystem-related system calls like open, read, write, and close need you to translate files to their corresponding fd (file descriptor). You need to add a file table to each process to keep track of this; it can be either per-process data or global data (your choice). A sketch of the per-process variant follows.
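A hedged sketch of that per-process table (the fdtable name, the 128-slot limit, and the fd_alloc helper are my assumptions, chosen to match the write handler below):

/* Sketch: a per-process fd table added to struct thread in
   threads/thread.h (names and the 128 limit are assumptions). */
#define FD_MAX 128

struct thread
  {
    /* ... existing members ... */
    struct file *fdtable[FD_MAX];   /* slot i holds the file behind fd i */
  };

/* allocate the lowest free slot; 0 and 1 stay reserved for STDIN/STDOUT */
static int
fd_alloc (struct file *file)
{
  struct thread *t = thread_current ();
  for (int fd = 2; fd < FD_MAX; fd++)
    if (t->fdtable[fd] == NULL)
      {
        t->fdtable[fd] = file;
        return fd;
      }
  return -1;                        /* table full */
}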
case SYS_WRITE:                             /* write to a file */
  {
    int fd = *(p + 1);                      /* extract the arguments */
    char *buffer = (char *) *(p + 2);
    unsigned size = (unsigned) *(p + 3);

    if ((void *) buffer >= PHYS_BASE)       /* buffer must live in user space */
      exit (-1);
    if (fd < 0 || fd >= 128)                /* out of range for the fd table */
      exit (-1);
    if (fd == 0)                            /* writing to STDIN is an error */
      exit (-1);

    if (fd == 1)                            /* writing to STDOUT */
      {
        unsigned left = size;
        while (left >= 100)                 /* putbuf() to the console 100 bytes at a time */
          {
            putbuf (buffer, 100);
            buffer += 100;
            left -= 100;
          }
        putbuf (buffer, left);
        f->eax = (int) size;
      }
    else if (thread_current ()->fdtable[fd] == NULL)   /* my per-thread fd table */
      f->eax = -1;
    else
      {
        struct file *fil = thread_current ()->fdtable[fd];
        if (is_directory (fil->inode))      /* refuse to write to a directory */
          exit (-1);
        f->eax = file_write (fil, buffer, (off_t) size);
      }
    break;
  }
open - adds a new entry to the fd table and returns the fd number you assign to the file (sketched below),
close - removes that entry from the fd table,
read - similar to write.
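Hedged sketches of the open and close cases, under the same fdtable/fd_alloc assumptions as above:

case SYS_OPEN:
  {
    const char *name = (const char *) *(p + 1);
    if (name == NULL)
      exit (-1);
    struct file *file = filesys_open (name);        /* NULL on failure */
    f->eax = (file == NULL) ? -1 : fd_alloc (file); /* -1 or the new fd */
    break;
  }
case SYS_CLOSE:
  {
    int fd = *(p + 1);
    if (fd < 2 || fd >= FD_MAX || thread_current ()->fdtable[fd] == NULL)
      exit (-1);
    file_close (thread_current ()->fdtable[fd]);
    thread_current ()->fdtable[fd] = NULL;          /* free the slot */
    break;
  }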
The process-creation and wait system calls are not so simple to implement...
Cheers :)
I'm writing a little VOIP app like Skype, which works quite well right now, but I've run into a very strange problem.
In one thread, inside a while(true) loop, I call the Winsock recv() function twice per iteration to get data from a socket.
The first call gets 2 bytes, which are cast into a (short), while the second call gets the rest of the message, which looks like:
Complete Message: [2 Byte Header | Message, length determined by the 2Byte Header]
These packets arrive at roughly 49/sec, which comes to roughly 3000 bytes/sec.
The content of these packets is audio-data that gets converted into wave.
With ioctlsocket() I determine whether there is data left on the socket after each "message" I receive (2 bytes + data). If there is something on the socket right after I have received a message within the thread's while(true) loop, that next message is received but thrown away, to work against accumulating latency.
This concept works very well, but here's the problem:
While my VOIP program is running and I download a file in parallel (e.g. via a browser), too much data always piles up on the socket, because during the download the recv() loop seems to slow down. This happens in every download/upload situation apart from the VOIP traffic itself.
I don't know where this behaviour comes from, but when I cancel every up/download apart from my application's VOIP traffic, my app works perfectly again.
When the program runs perfectly, the ioctlsocket() call writes 0 into the bytesLeft variable, which is defined in the class that the receive function belongs to.
Does somebody know where this comes from? I'll attach my receive function down below:
std::string D_SOCKETS::receive_message(){
    recv(ClientSocket, (char*)&val, sizeof(val), MSG_WAITALL);     // 2-byte length header
    receivedBytes = recv(ClientSocket, buffer, val, MSG_WAITALL);  // payload of that length
    if (receivedBytes != val){
        printf("SHORT: %d PAKET: %d ERROR: %d", val, receivedBytes, WSAGetLastError());
        exit(128);
    }
    ioctlsocket(ClientSocket, FIONREAD, &bytesLeft);
    cout << "Bytes left on the socket:" << bytesLeft << endl;
    if (bytesLeft > 20)
    {
        // message was received, but is thrown away to work against upstacking latency
        return std::string();
    }
    else
        return std::string(buffer, receivedBytes);
}
There is no need to use ioctlsocket() to discard data; needing to would indicate a bug in your protocol design. Assuming you are using TCP (you did not say), there should not be any leftover data if your 2-byte header is always accurate. After reading the 2-byte header and then reading the specified number of bytes, the next bytes you receive constitute your next message, and should not be discarded simply because they exist.
The fact that ioctlsocket() reports more bytes available means that you are receiving messages faster than you are reading them from the socket. Make your reading code run faster, don't throw away good data due to your slowness.
Your reading model is not efficient. Instead of reading 2 bytes, then X bytes, then 2 bytes, and so on, you should instead use a larger buffer to read more raw data from the socket at one time (use ioctlsocket() to know how many bytes are available, and then read at least that many bytes at one time and append them to the end of your buffer), and then parse as many complete messages are in the buffer before then reading more raw data from the socket again. The more data you can read at a time, the faster you can receive data.
To help speed up the code even more, don't process the messages inside the loop directly, either. Do the processing in another thread instead. Have the reading loop put complete messages in a queue and go back to reading, and then have a processing thread pull from the queue whenever messages are available for processing.
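A rough C sketch of that reading model, under stated assumptions: a 2-byte native-endian length prefix as in the question, a single connection, and a hypothetical queue_message() that hands completed messages to the processing thread:

/* Sketch: drain the socket into one large buffer, then peel off as many
   complete [2-byte length | payload] messages as it holds. */
#include <winsock2.h>
#include <string.h>

void queue_message(const char *data, int len);   /* hypothetical hand-off */

static char inbuf[64 * 1024];
static int  inlen = 0;                    /* bytes currently buffered */

void pump_socket(SOCKET s)
{
    int n = recv(s, inbuf + inlen, (int)sizeof inbuf - inlen, 0);
    if (n <= 0)
        return;                           /* error or connection closed */
    inlen += n;

    int off = 0;
    while (inlen - off >= 2) {            /* enough bytes for a header? */
        short msglen;
        memcpy(&msglen, inbuf + off, 2);
        if (inlen - off - 2 < msglen)
            break;                        /* payload not complete yet */
        queue_message(inbuf + off + 2, msglen);
        off += 2 + msglen;
    }
    memmove(inbuf, inbuf + off, inlen - off);  /* keep any partial tail */
    inlen -= off;
}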