How does wakeup(void *chan) work in xv6?

I'm learning OS development and reading the xv6 code, currently the console code in particular. Basically, I don't get how the console wakes a process up.
in console.c there is a function:
void consoleintr(int (*getc)(void)) {
  ....
  while((c = getc()) >= 0) {
    switch(c) {
    ....
    default:
      ....
      if(c == '\n' || c == C('D') || input.rightmost == input.r + INPUT_BUF) {
        wakeup(&input.r);
      }
    }
  }
}
So I get it: when the line ends (or the buffer is full), it calls wakeup(&input.r).
Then there is this in proc.c:
// Wake up all processes sleeping on chan.
// The ptable lock must be held.
static void wakeup1(void *chan)
{
  struct proc *p;

  for(p = ptable.proc; p < &ptable.proc[NPROC]; p++)
    if(p->state == SLEEPING && p->chan == chan)
      p->state = RUNNABLE;
}
// Wake up all processes sleeping on chan.
void wakeup(void *chan)
{
  acquire(&ptable.lock);
  wakeup1(chan);
  release(&ptable.lock);
}
What is happening here? Why is it comparing the address of a console buffer with a proc's chan? What is this chan?

It is for processes that are waiting (sleeping) for console input. See here:
int
consoleread(struct inode *ip, char *dst, int n)
...
  sleep(&input.r, &cons.lock);
The code you mentioned wakes up these sleeping processes, because data has arrived from the console and is now available for processing.
chan is a channel. You can wait (sleep) on different things; this channel is for console input data.
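To make the chan pairing concrete, here is a minimal sketch of how the two sides meet, simplified from xv6's console.c (locking and error handling elided):

// The channel is just an agreed-upon address; no data travels through it.
// sleep() records it in p->chan; wakeup() scans ptable for matching procs.

// Reader side (consoleread): block while the buffer is empty.
while(input.r == input.w)        // no unread bytes yet
  sleep(&input.r, &cons.lock);   // p->chan = &input.r, p->state = SLEEPING

// Interrupt side (consoleintr): after queueing a full line:
wakeup(&input.r);                // every proc with p->chan == &input.r becomes RUNNABLE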

Related

Decoding delimited frames from byte arrays

I have frames that are delimited by bytes marking the start and end of the frame (the delimiters do not otherwise appear in the stream).
I read a chunk from disk or a network socket; I then need to pass it to a deserializer, but only after I have de-framed the packet first.
Frames may span multiple chunks that have been read; note how frame 3 is split across array 1 and array 2.
Rather than reinvent the wheel for this common problem, do any GitHub or similar projects exist?
I am investigating ReadOnlySequenceSegment<T> from https://www.codemag.com/article/1807051/Introducing-.NET-Core-2.1-Flagship-Types-Span-T-and-Memory-T and will post updates as I work out the requirements.
Update
Further to Stephen Cleary's link (thank you!!) to https://github.com/davidfowl/TcpEcho/blob/master/src/Server/Program.cs I have the code below.
My data is JSON, so unlike the original question, the delimiter tokens will appear in the stream. Therefore I have to count the array delimiters and only declare a frame when I have found the outermost [ and ] characters.
The code below works, and fewer manual copies are done (I'm not sure whether they still happen behind the scenes; the code is quite a bit neater using David Fowler's approach).
However, I am converting to an array instead of using buffer.PositionOf((byte)'['), since I was unable to see how I could call PositionOf with an offset applied (i.e. scan deeper into the frame, past previously found delimiter tokens).
Am I using/butchering the library in a brute-force way, or is the code below good to go with the array conversion?
using System;
using System.Buffers;
using System.IO;
using System.IO.Pipelines;
using System.Threading.Tasks;

class Program
{
    static async Task Main(string[] args)
    {
        using var stream = File.Open(args[0], FileMode.Open);
        var reader = PipeReader.Create(stream);

        while (true)
        {
            ReadResult result = await reader.ReadAsync();
            ReadOnlySequence<byte> buffer = result.Buffer;

            while (TryDeframe(ref buffer, out ReadOnlySequence<byte> line))
            {
                // Process the line.
                var str = System.Text.Encoding.UTF8.GetString(line.ToArray());
                Console.WriteLine(str);
            }

            // Tell the PipeReader how much of the buffer has been consumed.
            reader.AdvanceTo(buffer.Start, buffer.End);

            // Stop reading if there's no more data coming.
            if (result.IsCompleted)
            {
                break;
            }
        }

        // Mark the PipeReader as complete.
        await reader.CompleteAsync();
    }

    private static bool TryDeframe(ref ReadOnlySequence<byte> buffer, out ReadOnlySequence<byte> frame)
    {
        int frameCount = 0;
        int start = -1;
        int end = -1;

        var bytes = buffer.ToArray();
        for (var i = 0; i < bytes.Length; i++)
        {
            var b = bytes[i];
            if (b == (byte)'[')
            {
                if (start == -1)
                    start = i;
                frameCount++;
            }
            else if (b == (byte)']')
            {
                frameCount--;
                if (frameCount == 0)
                {
                    end = i;
                    break;
                }
            }
        }

        if (start == -1 || end == -1) // no frame found
        {
            frame = default;
            return false;
        }

        // Slice takes (start, length); end is inclusive, so the length is end - start + 1.
        frame = buffer.Slice(start, end - start + 1);
        // Consume everything up to and including the frame, including any bytes before it.
        buffer = buffer.Slice(end + 1);
        return true;
    }
}
do any GitHub or similar projects exist?
David Fowler has an echo server that uses Pipelines to implement delimited frames.

How do I read all the data that should come over UART on an STM32F4?

I am currently working on USART with an STM32, and it was working while I was using the USART interrupt: it filled rxBuff while I was sending with HTerm via USB. But it does not fill the buffer when I poll in the main loop; it only takes the first character of the transmitted data. For example, when I tried to send "hello", it put only "h" into rxBuff and stopped. When I sent it again, rxBuff became [h,h], which means it only ever takes the first character.
Working:
void USART1_IRQHandler(void) {
  rxBuff[i++] = USART_ReceiveData(USART1);
  if (i > RX_BUFFERSIZE) {
    i = 0;
  }
  USART_ClearITPendingBit(USART1, USART_IT_RXNE);
}
Not working :
int main(void) {
  while (1) {
    if (USART_GetFlagStatus(USART1, USART_FLAG_RXNE) == SET) {
      rxBuff[i] = USART_ReceiveData(USART1);
      i++;
    }
    if (i > RX_BUFFERSIZE) {
      i = 0;
    }
  }
}

FCFS implementation for xv6

Currently, for my college project, I am trying to implement the FCFS and priority scheduling algorithms for xv6. I am done with the priority one and am now trying to make FCFS work. The following is the modification I made to the code:
void
scheduler(void)
{
  struct proc *p = 0;
  struct cpu *c = mycpu();
  c->proc = 0;

  for(;;)
  {
    // Enable interrupts on this processor.
    sti();

    // Loop over process table looking for process to run.
    acquire(&ptable.lock);
    for(p = ptable.proc; p < &ptable.proc[NPROC]; p++)
    {
      struct proc *minP = 0;

      if(p->state != RUNNABLE)
        continue;

      // Ignore the init and sh processes in FCFS.
      if(p->pid > 1)
      {
        if(minP != 0){
          // Here I find the process with the lowest creation time (the first one created).
          if(p->ctime < minP->ctime)
            minP = p;
        }
        else
          minP = p;
      }

      // If I found the process that was created first and it is runnable, I run it.
      // (In real FCFS I should not check whether it is runnable, but for testing
      // purposes I have to, otherwise every time I launch a process that does an
      // I/O operation (every simple command), everything blocks.)
      if(minP != 0 && p->state == RUNNABLE)
        p = minP;
      if(p != 0)
      {
        // Switch to chosen process. It is the process's job
        // to release ptable.lock and then reacquire it
        // before jumping back to us.
        c->proc = p;
        switchuvm(p);
        p->state = RUNNING;
        swtch(&(c->scheduler), p->context);
        switchkvm();

        // Process is done running for now.
        // It should have changed its p->state before coming back.
        c->proc = 0;
      }
    }
    release(&ptable.lock);
  }
}
Now, what I would like to ask is: when I run two dummy processes (following the usual convention, a foo.c that spawns child processes doing useless, time-consuming calculations), each producing a child, why am I still able to run ps?
Technically, both of the two available CPUs should be occupied running the two dummy processes, correct?
Additionally, I set the creation time as the priority using the algorithm I wrote for priority scheduling. It turns out that after creating the two processes, I cannot run anything, meaning both CPUs are in use.
I think you've made two mistakes.
First, the process context switch is inside your inner for loop; it should come after it:
schedule()
{
  // for ever
  for(;;)
  {
    // select process to run
    for(p = ptable.proc; p < &ptable.proc[NPROC]; p++)
    {
      ...
    }

    // run proc
    if (p != 0)
    {
      ...
    }
  }
}
Second, you've made a little mistake in the minP selection:
if(minP != 0 && p->state == RUNNABLE)
  p = minP;
should be
if(minP != 0 && minP->state == RUNNABLE)
  p = minP;
but since minP's state is necessarily RUNNABLE, and since you test that it is not null before running it, you could simply write:
p = minP;
So your corrected code could be:
void
scheduler(void)
{
  struct proc *p = 0;
  struct cpu *c = mycpu();
  c->proc = 0;

  for(;;)
  {
    sti();

    struct proc *minP = 0;

    // Loop over process table looking for process to run.
    acquire(&ptable.lock);
    for(p = ptable.proc; p < &ptable.proc[NPROC]; p++)
    {
      if(p->state != RUNNABLE)
        continue;

      // Ignore the init and sh processes in FCFS.
      if(p->pid > 1)
      {
        if(minP != 0){
          // Here I find the process with the lowest creation time (the first one created).
          if(p->ctime < minP->ctime)
            minP = p;
        }
        else
          minP = p;
      }
    }
    p = minP;

    if(p != 0)
    {
      c->proc = p;
      switchuvm(p);
      p->state = RUNNING;
      swtch(&(c->scheduler), p->context);
      switchkvm();
      c->proc = 0;
    }

    // Keep holding ptable.lock across the context switch, as in the stock
    // xv6 scheduler; the chosen process releases it and reacquires it.
    release(&ptable.lock);
  }
}
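Note that ctime is not a field in stock xv6; the question assumes a modified kernel. A minimal sketch of that modification, assuming a tick-based creation timestamp:

// proc.h: add a creation-time field to struct proc:
//   uint ctime;    // tick count when the process was allocated

// proc.c, in allocproc(), once a free slot has been claimed:
acquire(&tickslock);
p->ctime = ticks;    // stamp the creation time that the FCFS comparison uses
release(&tickslock);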

read() hangs while watching /dev/input/event0 using inotify

I need to watch /dev/input/event0 for key events. I have used inotify_add_watch(), but the read() call hangs. However, if I cat /dev/input/event0 I can see some events. Please let me know what is wrong. Below is my code snippet:
/* creating the INOTIFY instance */
fd = inotify_init();

/* checking for error */
if (fd < 0) {
    perror("inotify_init");
}

/* adding /dev/input/event0 to the watch list */
wd = inotify_add_watch(fd, "/dev/input/event0", IN_ALL_EVENTS);
if (wd < 0) {
    perror("inotify_add_watch");
    exit(-1);
}

for (;;) {
    length = read(fd, buffer, EVENT_BUF_LEN);
    printf("length = %d\n", length);
    if (length == 0)
        perror("read() from inotify fd returned 0!");
    if (length < 0)
        perror("read");
    printf("Read %ld bytes from inotify fd\n", (long) length);
}
You haven't explained why you think you need to use inotify for this.
I'm assuming that you just want to programmatically test whether an event is ready.
You can do something like:
int fd = open("/dev/input/event0", O_RDONLY|O_NONBLOCK);

struct pollfd pfd; // see man 2 poll
pfd.fd = fd;
pfd.events = POLLIN;

if (poll(&pfd, 1, 1000 /* milliseconds */) > 0) {
    // reading from fd now will not block
}
This will wait for up to 1 second (1000 milliseconds) for an event to be ready to read. You can change the timeout to whatever you need. You can also use 0 to test whether there is data available immediately without waiting.
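Once poll() reports the descriptor readable, the key events themselves arrive as struct input_event records from the evdev interface. A minimal sketch of draining them (an illustration only; drain_events is a hypothetical helper, not part of the original answer):

#include <linux/input.h>
#include <stdio.h>
#include <unistd.h>

/* Drain all immediately-available events from an evdev fd opened
 * with O_RDONLY|O_NONBLOCK, as in the snippet above. */
static void drain_events(int fd)
{
    struct input_event ev;

    while (read(fd, &ev, sizeof ev) == (ssize_t) sizeof ev) {
        if (ev.type == EV_KEY)  /* value: 0 = release, 1 = press, 2 = autorepeat */
            printf("key code=%u value=%d\n", (unsigned) ev.code, (int) ev.value);
    }
}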
read() is a blocking call and waits for a file update unless you specifically request otherwise.
If you want it to be non-blocking, simply pass the flag at creation:
file_descriptor = inotify_init1(IN_NONBLOCK);
https://linux.die.net/man/2/inotify_init1
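With a non-blocking inotify descriptor, the read loop then has to handle the case where no events are queued yet; a minimal sketch, assuming the fd, buffer, and EVENT_BUF_LEN from the question's snippet:

#include <errno.h>

length = read(fd, buffer, EVENT_BUF_LEN);
if (length < 0 && errno == EAGAIN) {
    /* no inotify events queued yet; do other work and retry later */
} else if (length > 0) {
    /* process the inotify_event records in buffer */
}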

using select() to detect connection close

As described in other posts, I'm trying to use select() in socket programming to detect closed connections. See the following code, which tries to detect closed connections with select() and a subsequent check on whether recv() returns 0. Before the while loop starts, there are two established TCP connections. In our controlled experiment, the first connection always closes after about 15 seconds and the second after about 30 seconds.
Theoretically (as described by others), when they get closed, select() should return (twice in our case), which allows us to detect both close events. The problem we face is that select() only returns once and never again, which allows us to detect ONLY the first connection close event. With a single connection the code works fine, but not with two or more.
Does anyone have any ideas or suggestions? Thanks.
while (1)
{
    printf("Waiting on select()...\n");
    if ((result = select(max + 1, &readset, NULL, NULL, NULL)) < 0)
    {
        printf("select() failed");
        break;
    }
    if (result > 0)
    {
        i = 0;
        while (i < max + 1)
        {
            if (FD_ISSET(i, &readset))
            {
                result = recv(i, buffer, sizeof(buffer), 0);
                if (result == 0)
                {
                    close(i);
                    FD_CLR(i, &readset);
                    if (i == max)
                    {
                        max -= 1;
                    }
                }
            }
            i++;
        }
    }
}
select() modifies readset, removing the socket(s) that are not readable. Every time you call select(), you have to reset and refill readset with your current list of active sockets that you want to test, e.g.:
fd_set readset;
int max;

while (1)
{
    FD_ZERO(&readset);
    max = -1;
    // populate readset from the list of active sockets...
    // set max accordingly...

    printf("Waiting on select()...\n");
    result = select(max + 1, &readset, NULL, NULL, NULL);
    if (result < 0)
    {
        printf("select() failed");
        break;
    }
    if (result == 0)
        continue;

    for (int i = 0; i <= max; ++i)
    {
        if (FD_ISSET(i, &readset))
        {
            result = recv(i, buffer, sizeof(buffer), 0);
            if (result <= 0)
            {
                close(i);
                // remove i from the list of active sockets...
            }
        }
    }
}
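For example, the two commented placeholder steps could look like this, assuming the active sockets are kept in a simple array (socks and sock_count are hypothetical names, not from the answer):

// Rebuild the fd_set from scratch before every select() call.
FD_ZERO(&readset);
max = -1;
for (int j = 0; j < sock_count; ++j)
{
    FD_SET(socks[j], &readset);
    if (socks[j] > max)
        max = socks[j];  // select() needs the highest fd, plus one, as its first argument
}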