For my college project, I am trying to implement FCFS and Priority scheduling algorithms for xv6. I am done with the Priority one and am now trying to make FCFS work. The following is the modification I made to the code:
void
scheduler(void)
{
  struct proc *p = 0;
  struct cpu *c = mycpu();
  c->proc = 0;

  for(;;)
  {
    // Enable interrupts on this processor.
    sti();

    // Loop over process table looking for process to run.
    acquire(&ptable.lock);
    for(p = ptable.proc; p < &ptable.proc[NPROC]; p++)
    {
      struct proc *minP = 0;

      if(p->state != RUNNABLE)
        continue;

      // ignore init and sh processes from FCFS
      if(p->pid > 1)
      {
        if (minP != 0){
          // here I find the process with the lowest creation time (the first one that was created)
          if(p->ctime < minP->ctime)
            minP = p;
        }
        else
          minP = p;
      }

      // If I found the process which was created first and it is runnable, I run it.
      // (In real FCFS I should not check whether it is runnable, but for testing purposes I have to,
      // otherwise every time I launch a process which does an I/O operation (every simple command)
      // everything would block.)
      if(minP != 0 && p->state == RUNNABLE)
        p = minP;

      if(p != 0)
      {
        // Switch to chosen process. It is the process's job
        // to release ptable.lock and then reacquire it
        // before jumping back to us.
        c->proc = p;
        switchuvm(p);
        p->state = RUNNING;

        swtch(&(c->scheduler), p->context);
        switchkvm();

        // Process is done running for now.
        // It should have changed its p->state before coming back.
        c->proc = 0;
      }
    }
    release(&ptable.lock);
  }
}
Now, what I would like to ask is this: when I run two dummy processes (following the usual convention of a foo.c that forks children doing useless, time-consuming calculations), each producing one child, why am I still able to run ps?
Technically, both of the two available CPUs should be occupied running the two dummy processes, correct?
Additionally, I set the creation time as the priority using the algorithm I wrote for Priority scheduling. It turns out that after creating the two processes I cannot run anything else, meaning both CPUs really are in use at that point.
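For reference, the kind of dummy workload I mean looks roughly like this (a hypothetical foo.c sketch against the stock x86 xv6 user library; fork(), wait(), getpid(), exit() and printf(fd, ...) are the usual xv6 user calls):

#include "types.h"
#include "stat.h"
#include "user.h"

int
main(void)
{
  volatile int x = 0;
  int i;
  int pid = fork();          // parent and child each burn CPU from here on

  for(i = 0; i < 100000000; i++)
    x = x + i;               // useless calculation that consumes time

  if(pid > 0)
    wait();                  // parent reaps its child
  printf(1, "dummy %d done\n", getpid());
  exit();
}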
I think you've made two mistakes:
The code that switches to the chosen process is inside your process-selection for loop; it should come after it:
schedule()
{
  // for ever
  for(;;)
  {
    // select process to run
    for(p = ptable.proc; p < &ptable.proc[NPROC]; p++)
    {
      ...
    }

    // run proc
    if (p != 0)
    {
      ...
    }
  }
}
You've also made a small mistake in the minP selection:
if(minP != 0 && p->state == RUNNABLE)
p = minP;
should be
if(minP != 0 && minP->state == RUNNABLE)
p = minP;
but since minP's state is necessarily RUNNABLE, and you already test that it is not null before running it, you could simply write
p = minP;
So your corrected code could be:
void
scheduler(void)
{
  struct proc *p = 0;
  struct cpu *c = mycpu();
  c->proc = 0;

  for(;;)
  {
    sti();

    struct proc *minP = 0;

    // Loop over process table looking for process to run.
    acquire(&ptable.lock);
    for(p = ptable.proc; p < &ptable.proc[NPROC]; p++)
    {
      if(p->state != RUNNABLE)
        continue;

      // ignore init and sh processes from FCFS
      if(p->pid > 1)
      {
        if (minP != 0) {
          // here I find the process with the lowest creation time (the first one that was created)
          if(p->ctime < minP->ctime)
            minP = p;
        }
        else
          minP = p;
      }
    }
    p = minP;

    if(p != 0)
    {
      // ptable.lock must stay held across the switch, as in stock xv6;
      // the chosen process releases it and reacquires it before returning here.
      c->proc = p;
      switchuvm(p);
      p->state = RUNNING;

      swtch(&(c->scheduler), p->context);
      switchkvm();

      c->proc = 0;
    }
    release(&ptable.lock);
  }
}
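For the ctime comparison to mean anything, p->ctime has to be recorded when the process is created. A minimal sketch, assuming you added a ctime field to struct proc and use the global ticks counter as the timestamp, is to set it in allocproc() in proc.c:

found:
  p->state = EMBRYO;
  p->pid = nextpid++;
  p->ctime = ticks;   // creation time consumed by the FCFS comparison above

  release(&ptable.lock);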
I have frames that are delimited by start and stop bytes which do not otherwise appear in the stream.
I read a chunk from disk or a network socket; I then need to pass it to a deserializer, but only after I have de-framed the packet.
Frames may span multiple chunks that have been read (e.g. one frame can be split across two consecutively read arrays).
Rather than reinvent the wheel for this common problem, do any GitHub or similar projects exist?
I am investigating ReadOnlySequenceSegment<T> from https://www.codemag.com/article/1807051/Introducing-.NET-Core-2.1-Flagship-Types-Span-T-and-Memory-T and will post updates as I work out the requirements.
Update
Further to Stephen Cleary's link (thank you!!) to https://github.com/davidfowl/TcpEcho/blob/master/src/Server/Program.cs, I have the code below.
My data is JSON, so unlike in the original question the delimiter tokens will appear inside the stream. Therefore I have to count the array delimiters and only declare a frame once I have found the outermost [ and ] characters.
The code below works, and fewer manual copies are done (not sure whether they still happen behind the scenes); the code is quite a bit neater using David Fowler's approach.
However, I am converting the buffer to an array instead of using buffer.PositionOf((byte)'['), since I could not see how to call PositionOf with an offset applied (i.e. scan deeper into the frame, past previously found delimiter tokens).
Am I using/butchering the library in a brute-force way, or is the code below good to go with the array conversion?
class Program
{
    static async Task Main(string[] args)
    {
        using var stream = File.Open(args[0], FileMode.Open);
        var reader = PipeReader.Create(stream);

        while (true)
        {
            ReadResult result = await reader.ReadAsync();
            ReadOnlySequence<byte> buffer = result.Buffer;

            while (TryDeframe(ref buffer, out ReadOnlySequence<byte> line))
            {
                // Process the line.
                var str = System.Text.Encoding.UTF8.GetString(line.ToArray());
                Console.WriteLine(str);
            }

            // Tell the PipeReader how much of the buffer has been consumed.
            reader.AdvanceTo(buffer.Start, buffer.End);

            // Stop reading if there's no more data coming.
            if (result.IsCompleted)
            {
                break;
            }
        }

        // Mark the PipeReader as complete.
        await reader.CompleteAsync();
    }

    private static bool TryDeframe(ref ReadOnlySequence<byte> buffer, out ReadOnlySequence<byte> frame)
    {
        int frameCount = 0;
        int start = -1;
        int end = -1;

        var bytes = buffer.ToArray();
        for (var i = 0; i < bytes.Length; i++)
        {
            var b = bytes[i];
            if (b == (byte)'[')
            {
                if (start == -1)
                    start = i;
                frameCount++;
            }
            else if (b == (byte)']')
            {
                frameCount--;
                if (frameCount == 0)
                {
                    end = i;
                    break;
                }
            }
        }

        if (start == -1 || end == -1) // no frame found
        {
            frame = default;
            return false;
        }

        frame = buffer.Slice(start, end + 1);
        buffer = buffer.Slice(frame.Length);
        return true;
    }
}
do any github or similar projects exist?
David Fowler has an echo server that uses Pipelines to implement delimited frames.
I am trying to write a parameterized constructor for a linked list. My program is supposed to implement a queue using a linked list, so I want a parameterized constructor like Queue(int value, int size), but it does not run and does not build the list.
This is my code for this problem:
Queue(int value, int _size)
{
    for(int i = 0; i < _size; ++i)
    {
        Node* temp = new Node;
        temp->data = value;
        temp->next = nullptr;

        if(head == nullptr)
        {
            head = tail = temp;
        }
        else
        {
            tail->next = temp;
            tail = temp;
        }
    }
}
I expected the result to be a list filled with value, size times; for example, if I run Queue x(20,3) the linked list should be
20 20 20
Since this is a constructor, head and tail have not been initialized yet, so you cannot use them like this. I would suggest adding head = tail = nullptr; just before the loop and seeing what happens.
Follow this code after your node creation; I hope this will work. (Note that i++ versus ++i makes no difference here; either way the loop runs _size times.)
if(head == NULL)
    head = temp;
else {
    Node *x;
    x = head;
    while(x->next != NULL)
        x = x->next;
    x->next = temp;
}
I'm learning about OS development and am currently reading the xv6 code, the console code in particular. Basically, I don't get how the console launches a process.
In console.c there is this function:
void consoleintr(int (*getc)(void)) {
  ....
  while((c = getc()) >= 0) {
    switch(c) {
    ....
    default:
      ....
      if(c == '\n' || c == C('D') || input.rightmost == input.r + INPUT_BUF) {
        wakeup(&input.r);
      }
    }
  }
}
So I get that when the line ends (or the buffer length reaches its maximum), it calls wakeup(&input.r).
Then there is this in proc.c:
// Wake up all processes sleeping on chan.
// The ptable lock must be held.
static void wakeup1(void *chan)
{
  struct proc *p;

  for(p = ptable.proc; p < &ptable.proc[NPROC]; p++)
    if(p->state == SLEEPING && p->chan == chan)
      p->state = RUNNABLE;
}

// Wake up all processes sleeping on chan.
void wakeup(void *chan)
{
  acquire(&ptable.lock);
  wakeup1(chan);
  release(&ptable.lock);
}
What is happening here? Why is it comparing the address of a console buffer with the proc's chan? What is this chan?
It is for processes that are waiting (sleeping) for console input. See here:
int
consoleread(struct inode *ip, char *dst, int n)
{
  ...
  sleep(&input.r, &cons.lock);
  ...
}
The code you mentioned wakes up these sleeping processes, because data has arrived from the console and is now available for processing.
chan is a channel: you can wait (sleep) on different things, and this particular channel is for console input data.
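Roughly, the relevant part of sleep() in proc.c looks like this (abridged sketch; the lock handoff between lk and ptable.lock is elided):

void
sleep(void *chan, struct spinlock *lk)
{
  struct proc *p = myproc();
  // ... acquire ptable.lock / release lk ...

  p->chan = chan;      // remember what we are waiting for, e.g. &input.r
  p->state = SLEEPING;
  sched();             // give up the CPU until wakeup(chan) makes us RUNNABLE again

  p->chan = 0;
  // ... release ptable.lock / reacquire lk ...
}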
A PIC24F used as an I2C slave has lock-up issues when sending multiple data bytes to the master (MASTER READ).
The PIC (specifically a 24FJ128GB202) as an I2C slave receiving data (MASTER WRITE) works perfectly, passing all unit tests (address only, address and register, address + register + single data, and multiple data with auto-increment).
My slave code can handle address-only transactions and reading a single byte. It hangs on multiple reads from the master (auto-increment).
I used the Microchip app note as the basis of my code, and also looked at the Code Composer generated code.
Initialization:
void I2C1_Initialize(void)
{
    I2C1ADD = I2C1_SLAVE_ADDRESS;
    I2C1MSK = I2C1_SLAVE_MASK;
    I2C1CONL = 0x8200;
    I2C1CONH = 0x0000;
    I2C1STAT = 0x0000;

    // clear the slave interrupt flag
    IFS1bits.SI2C1IF = 0;
    // enable the slave interrupt
    IEC1bits.SI2C1IE = 1;
}
Interrupt Write:
// clear the interrupt
_SI2C1IF = 0;

// Write from Master to Slave - Address
// S = 1, D_A = 0, R_W = 0, BF = 1
if ( (I2C1STATbits.S == 1) && (I2C1STATbits.D_A == 0) && (I2C1STATbits.R_W == 0) && (I2C1STATbits.RBF == 1) )
{
    myI2CAd = I2C1RCV;
    mySTATE = 0;
}

// Write from Master to Slave - Data
// S = 1, D_A = 1, R_W = 0, BF = 1
if ( (I2C1STATbits.S == 1) && (I2C1STATbits.D_A == 1) && (I2C1STATbits.R_W == 0) && (I2C1STATbits.RBF == 1))
{
    mySTATE++;
    if(mySTATE == 1)
    {
        myREGISTER = I2C1RCV;
        // limit register to MAX
        if(myREGISTER > I2CMAXREGISTER) myREGISTER = I2CMAXREGISTER;
    }
    if(mySTATE == 2)
    {
        myDATA = I2C1RCV;
        shelfregister[myREGISTER] = myDATA;
    }
    if(mySTATE > 2)
    {
        myDATA = I2C1RCV;
        // limit register to MAX
        if(myREGISTER < I2CMAXREGISTER) myREGISTER++;
        shelfregister[myREGISTER] = myDATA;
    }
}
Interrupt Read:
// Read from Slave to Master - Address
// S = 1, D_A = 0, R_W = 1, BF = 0
if ( (I2C1STATbits.S == 1) && (I2C1STATbits.D_A == 0) && (I2C1STATbits.R_W == 1) && (I2C1STATbits.TBF == 0) )
{
    myI2CAd = I2C1RCV;
    I2C1TRN = shelfregister[myREGISTER];
    I2C1CONLbits.SCLREL = 1;
    if(myREGISTER < I2CMAXREGISTER) myREGISTER++;
}
The code generated by the Code Composer for slave operation is quite similar and has the same issue.
My main question is how to handle multiple reads from the master, not just the single byte handled in the read branch above. I suspect it involves checking I2C1STATbits.ACKSTAT, but the timing of that bit is unclear to me: it should become valid about 9 clocks after the byte is placed in the register, yet there is no bit that flags that point in time. Any guidance appreciated.
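Not a verified fix, but the usual pattern described in the PIC24 family reference manual is that after each transmitted byte the slave interrupt fires again with D_A = 1 and R_W = 1, and ACKSTAT reflects the master's ACK/NACK at that point. A sketch of that extra branch, reusing the variables from the code above:

// Read from Slave to Master - Data
// Fires after each byte has been shifted out and the master has ACKed/NACKed it.
// S = 1, D_A = 1, R_W = 1
if ( (I2C1STATbits.S == 1) && (I2C1STATbits.D_A == 1) && (I2C1STATbits.R_W == 1) )
{
    if (I2C1STATbits.ACKSTAT == 0)           // master ACKed: it wants another byte
    {
        I2C1TRN = shelfregister[myREGISTER]; // auto-increment read
        if(myREGISTER < I2CMAXREGISTER) myREGISTER++;
        I2C1CONLbits.SCLREL = 1;             // release the clock so the byte goes out
    }
    // ACKSTAT == 1: master NACKed the last byte; the read is over, send nothing
}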
#include<stdio.h>
#include<stdlib.h>
#include<unistd.h>
#include<sys/sem.h>
#include<sys/ipc.h>

int sem_id;

void update_file(int number)
{
    struct sembuf sem_op;
    FILE* file;
    printf("Inside Update Process\n");

    /* wait on the semaphore, unless its value is non-negative. */
    sem_op.sem_num = 0;
    sem_op.sem_op = -1; /* <-- Amount by which the value of the semaphore is to be decreased */
    sem_op.sem_flg = 0;
    semop(sem_id, &sem_op, 1);

    /* we "locked" the semaphore, and are assured exclusive access to file. */
    /* manipulate the file in some way. for example, write a number into it. */
    file = fopen("file.txt", "a+");
    if (file) {
        fprintf(file, " \n%d\n", number);
        fclose(file);
    }

    /* finally, signal the semaphore - increase its value by one. */
    sem_op.sem_num = 0;
    sem_op.sem_op = 1;
    sem_op.sem_flg = 0;
    semop( sem_id, &sem_op, 1);
}

void write_file(char* contents)
{
    printf("Inside Write Process\n");
    struct sembuf sem_op;

    sem_op.sem_num = 0;
    sem_op.sem_op = -1;
    sem_op.sem_flg = 0;
    semop( sem_id, &sem_op, 1);

    FILE *file = fopen("file.txt","w");
    if(file)
    {
        fprintf(file,contents);
        fclose(file);
    }

    sem_op.sem_num = 0;
    sem_op.sem_op = 1;
    sem_op.sem_flg = 0;
    semop( sem_id, &sem_op, 1);
}

int main()
{
    //key_t key = ftok("file.txt",'E');
    sem_id = semget( IPC_PRIVATE, 1, 0600 | IPC_CREAT);
    /* IPC_PRIVATE asks for a new semaphore set; 1 is the number of
       semaphores in the semaphore set */
    if(sem_id == -1)
    {
        perror("main : semget");
        exit(1);
    }
    int rc = semctl( sem_id, 0, SETVAL, 1);

    pid_t u = fork();
    if(u == 0)
    {
        update_file(100);
        exit(0);
    }
    else
    {
        wait();
    }

    pid_t w = fork();
    if(w == 0)
    {
        write_file("Hello!!");
        exit(0);
    }
    else
    {
        wait();
    }
}
If I build the above code as C, the write_file() function is called after the update_file() function.
Whereas if I build the same code as C++, the order of execution is reversed... why is that?
Just some suggestions, but it looks to me like it could be caused by a combination of things:
1. The wait() call is supposed to take a pointer argument (which can be NULL). The compiler should have caught this, but you must be picking up another definition somewhere that permits your syntax. You are also missing an include for sys/wait.h, which might be why the compiler isn't complaining as I'd expect it to.
2. Depending on your machine/OS configuration, the forked process may not get to run until after the parent yields. Assuming the wait() you are calling isn't working the way we would expect, it is possible for the parent to execute completely before the children get to run.
Unfortunately, I wasn't able to duplicate the same temporal behavior. However, when I generated assembly files for each of the two cases (C & C++), I noticed that the C++ version is missing the "wait" system call, but the C version is as I would expect. To me, this suggests that somewhere in the C++ headers this special version without an argument is being #defined out of the code. This difference could be the reason behind the behavior you are seeing.
In a nutshell: add the #include, and change your wait calls to wait(0).
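To make that concrete, here is a minimal, self-contained sketch of the intended sequencing, with the file/semaphore work replaced by prints; the key parts are the sys/wait.h include and passing an argument to wait():

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>   /* correct prototype: pid_t wait(int *status); */

int main(void)
{
    pid_t u = fork();
    if (u == 0) {
        printf("update child\n");   /* stands in for update_file(100) */
        exit(0);
    }
    wait(NULL);                     /* same as wait(0): block until the first child exits */

    pid_t w = fork();
    if (w == 0) {
        printf("write child\n");    /* stands in for write_file("Hello!!") */
        exit(0);
    }
    wait(NULL);
    return 0;
}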