System calls and the internal flow

Trying to understand the exact sequence of events in the system-call flow on Linux:
The user application program makes a system call.
This results in execution of code that triggers a software exception. (Which code is this? It's understandable if I called a glibc API, but what if my program invokes a system call directly? Is there some piece of code that regular system calls share which triggers this exception?)
Once the exception is triggered, the user application process's context needs to be preserved, and I presume the user application will be in a pending state (assuming a blocking system call was invoked).
The exception handler then runs and picks a suitable handler based on the system call invoked. (How is the system call number passed to the exception handler from the exception-triggering code?)
The handler runs. (Say it is a read call: it has to return the read data to the user process that triggered its invocation, which I assume means the exception-triggering code has also passed a pointer to the buffer into which the read data should be copied.)
Once the exception handler is done, control has to return to the application, which is still pending on the system call.
Does this flow look alright? Is there some sample code for each of these steps beyond the user application?
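For reference, here is a minimal sketch of what "calling a system call directly" looks like, assuming x86-64 Linux and GCC inline assembly: the syscall number goes in rax, the arguments (including the buffer pointer for a read/write) go in rdi/rsi/rdx, and the syscall instruction itself is the piece of code that triggers the trap into the kernel.

/* Minimal sketch: invoking the write system call directly, without glibc.
 * rax holds the syscall number, rdi/rsi/rdx hold the arguments; the
 * `syscall` instruction performs the user-to-kernel transition. */
static long raw_write(int fd, const void *buf, unsigned long count)
{
    long ret;
    __asm__ volatile (
        "syscall"                 /* trap into the kernel */
        : "=a"(ret)               /* result comes back in rax */
        : "a"(1L),                /* rax = 1, the write syscall number */
          "D"((long)fd),          /* rdi = fd */
          "S"(buf),               /* rsi = buffer pointer */
          "d"(count)              /* rdx = byte count */
        : "rcx", "r11", "memory"  /* syscall clobbers rcx and r11 */
    );
    return ret;
}

int main(void)
{
    raw_write(1, "hello\n", 6);   /* equivalent to write(1, "hello\n", 6) */
    return 0;
}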

Related

Should temporary registers be saved before issuing an environment call?

In the following RISC-V assembly code:
...
#Using some temporary (t) registers
...
addi a7, zero, 1   # PrintInt system call code
addi a0, zero, 100
ecall
...
Should any temporary (t) registers be saved to the stack before using ecall? When ecall is used, an exception occurs, the processor switches to kernel mode, and code is executed from an exception handler. Some information is saved when exceptions occur, such as EPC and CAUSE, but what about the temporary registers? Environment calls are considered not to be procedures, for safety reasons, but they look like procedures. Do the procedure calling conventions still apply in this case?
You are correct that the hardware captures some information, such as the epc (if it didn't, the interrupted pc would be lost during the transfer of control to the exception handler).
However that's all the hardware does — the rest is up to software.  Here, RARS is providing the exception handler for ecall.
The RARS documentation states (in the help section on syscalls, which is what these are called on MIPS; RARS descends from MARS):
Register contents are not affected by a system call, except for result registers as specified in the table below.
Below this quote in the help is the ecall function code table, labeled "Table of Available Services", in which 1 is PrintInt.
Should any temporary (t) registers be saved to the stack before using ecall?
No, it is not necessary: the t registers will be unaffected by an ecall.
This also means that we can add ecalls to do printf-style debugging without concern for preserving the t registers, which is nice. Keeping that in mind, though, we might generally avoid a0 and a7 for our own variables, since putting in an ecall for debugging will need those registers.
In addition, you can write your own exception handler, and it would not have to follow any particular conventions for parameter passing or even register preservation.

How to ensure the order of user callback functions in OpenCL?

I am working on an OpenCL implementation wherein a particular host-side function has to be called every time a clEnqueueReadBuffer command finishes executing.
I am calling the kernels in a loop. In an in-order queue it looks like the following:
clEnqueueNDRangeKernel() -> clEnqueueReadBuffer(&Event) ->
clEnqueueNDRangeKernel() -> clEnqueueReadBuffer(&Event) .......
I have used clSetEventCallback() to register a callback on the event from each read command. I have observed that, even though the command queue is an in-order queue, the callback functions do not execute in order.
Also, the OpenCL 1.2 specification mentions the following:
The order in which the registered user callback functions are called
is undefined. There is no guarantee that the callback functions
registered for various execution status values for an event will be
called in the exact order that the execution status of a command
changes.
Can anyone suggest a solution? I want the callback functions to execute in order.
A simple solution could be to subscribe the same callback function to both events. In the callback code, you can check the status of each relevant event and perform the operation you want accordingly.
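A minimal sketch of that approach, assuming OpenCL 1.2; the struct and the names on_read_complete and chunk_index are hypothetical:

#include <CL/cl.h>
#include <stdio.h>

typedef struct { int chunk_index; } CallbackData;

/* The same callback is registered on every read event. In an in-order
 * queue, CL_COMPLETE on a read event means that read (and everything
 * enqueued before it) has finished, regardless of the order in which
 * the runtime happens to invoke the callbacks. */
static void CL_CALLBACK on_read_complete(cl_event event, cl_int status,
                                         void *user_data)
{
    CallbackData *data = (CallbackData *)user_data;
    if (status == CL_COMPLETE)
        printf("chunk %d ready\n", data->chunk_index);
}

/* Registration, once per read command:
 *   clSetEventCallback(event1, CL_COMPLETE, on_read_complete, &data1);
 *   clSetEventCallback(event2, CL_COMPLETE, on_read_complete, &data2);
 */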
Note that on some implementations the driver will batch multiple commands for execution.
The immediate effect is that multiple events will be signaled "at once", even though the associated commands completed at different times.
// event1 & event2 are likely to be signaled at once:
clEnqueueNDRangeKernel();
clEnqueueReadBuffer(&event1);
clEnqueueNDRangeKernel();
clEnqueueReadBuffer(&event2);
Whereas:
// event1 is likely to be signaled before event2:
clEnqueueNDRangeKernel();
clEnqueueReadBuffer(&event1);
clFlush(queue);
clEnqueueNDRangeKernel();
clEnqueueReadBuffer(&event2);
clFlush(queue);
I would also check which exact thread the callbacks are invoked on.
Is it the same thread each time, or a different one? If the implementation opens a new thread for this task, it might be wiser to open a single thread yourself and wait for events in the order that you wish, as in the sketch below.
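A minimal sketch of that single-thread approach, assuming the per-read events were collected into an array (handle_chunk is a hypothetical handler):

#include <CL/cl.h>

void drain_in_order(cl_event *events, int num_events)
{
    for (int i = 0; i < num_events; ++i) {
        /* Block until events[i] reaches CL_COMPLETE; waiting in
         * submission order makes the handlers run in order too. */
        clWaitForEvents(1, &events[i]);
        /* handle_chunk(i);  hypothetical per-chunk work */
        clReleaseEvent(events[i]);
    }
}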

protractor: what is the relationship between the control flow and javascript event loop?

I'm having a difficult time trying to understand how the control flow in Protractor works in relation to how the JS event loop works. Here is what I know so far:
Protractor control flow stores commands that return promises in a queue. The first command will be at the front of the queue and the last command will be at the back. No command will be executed until the command in front of it has its promise resolved.
The JS event loop stores asynchronous tasks (callbacks, to be specific). Callbacks are not executed until all functions on the stack have completed and the stack is empty; before running each callback, there is a check on whether the stack is empty.
So let's take this code for example. It clicks a search button, which makes an API request; after the data is returned, it checks whether the field that stores the returned data exists.
elem('#searchButton').click(); // triggers an API call to retrieve data
browser.wait(ExpectedConditions.presenceOf(elem('#resultDataField')), 3000);
expect(elem('#resultDataField').isPresent()).toBeTruthy();
So with this code, I'm able to get it to work, but I don't know how it does it. How does the event loop apply in this scenario?
The core of the ControlFlow implementation is in runEventLoop_ (in Selenium's promise.js implementation).
As I understand it, the ControlFlow registers a call to runEventLoop_ with the JS event loop (e.g., with a 0-second timeout or some such). The call to runEventLoop_ can be thought of as a single iteration of a normal event loop. It registers code to actually run a scheduled task (i.e., actually do the work you queued up during your it). Once that task completes or fails (e.g., by hooking its async promise callbacks), the next iteration of runEventLoop_ is scheduled (see the calls to scheduleEventLoop in runEventLoop_).
There is some complexity when a callback ends up registering new promises: those need to be "inserted" before the old next event, which is accomplished by creating a "nested" control flow. Mostly you should never have to know this.

Why does `libpq` use polling rather than notification for data fetch?

I am reading the libpq reference. It has both sync and async methods, but I discovered something strange.
The PQsendQuery function seems to send a query and return immediately. I expected a callback function to get notified, but there is no such thing, and the manual says to poll for data availability.
I don't understand why the async method works by polling. As libpq is the official client implementation, I believe there should be a good reason for this design. What is it? Or am I missing callback facilities mentioned somewhere else?
In the execution model of a mono-threaded program, the execution flow can't be interrupted by data coming back from an asynchronous query or, more generally, from a network socket. Only signals (SIGTERM and friends) may interrupt the flow, but signals can't be hooked to incoming data.
That's why having a callback to get notified of incoming data is not possible. The piece of code in libpq that would be necessary to emit the callback would never run if your code doesn't call it. And if you have to call it, that defeats the whole point of a callback.
There are libraries like Qt that provide callbacks, but they're architected from the ground up with a main loop that acts as an event processor. The user code is organized in callbacks, and event-based processing of incoming data is possible. But in this case the library takes ownership of the execution flow, meaning its main loop polls the data sources. That just shifts the responsibility to another piece of code outside of libpq.
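For concreteness, here is a minimal sketch (error handling omitted) of the polling pattern the manual describes: the program itself decides when to sleep on the connection's socket and when to consume input, which is exactly the point about who owns the execution flow.

#include <libpq-fe.h>
#include <sys/select.h>
#include <stddef.h>

void run_async_query(PGconn *conn)
{
    PQsendQuery(conn, "SELECT 42");                    /* returns immediately */
    int sock = PQsocket(conn);
    for (;;) {
        fd_set readable;
        FD_ZERO(&readable);
        FD_SET(sock, &readable);
        select(sock + 1, &readable, NULL, NULL, NULL); /* sleep until data */
        PQconsumeInput(conn);                          /* read what arrived */
        if (!PQisBusy(conn))                           /* full result ready */
            break;
    }
    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL)          /* drain all results */
        PQclear(res);
}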
This page describes how I can get notified of an async result fetch:
http://www.postgresql.org/docs/9.3/static/libpq-events.html#LIBPQ-EVENTS-PROC
PGEVT_RESULTCREATE
The result creation event is fired in response to any query execution
function that generates a result, including PQgetResult. This event
will only be fired after the result has been created successfully.
typedef struct
{
    PGconn *conn;
    PGresult *result;
} PGEventResultCreate;
When a PGEVT_RESULTCREATE event is received, the evtInfo pointer should be cast to a PGEventResultCreate *. The conn is the connection used to generate the result. This is the ideal place to initialize any instanceData that needs to be associated with the result. If the event procedure fails, the result will be cleared and the failure will be propagated. The event procedure must not try to PQclear the result object for itself. When returning a failure code, all cleanup must be performed as no PGEVT_RESULTDESTROY event will be sent.
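A minimal sketch of registering such an event procedure (assuming an already-established PGconn *conn); note the event procedure still runs synchronously, inside calls like PQgetResult on the calling thread, so it is not a truly asynchronous notification:

#include <libpq-fe.h>
#include <libpq-events.h>
#include <stdio.h>

static int my_event_proc(PGEventId evtId, void *evtInfo, void *passThrough)
{
    if (evtId == PGEVT_RESULTCREATE) {
        /* Fired from within the query execution function that created
         * the result, e.g. PQgetResult. */
        PGEventResultCreate *e = (PGEventResultCreate *)evtInfo;
        printf("result created: %d row(s)\n", PQntuples(e->result));
    }
    return 1;  /* nonzero signals success */
}

/* after connecting:
 *   PQregisterEventProc(conn, my_event_proc, "my_event_proc", NULL);
 */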

Suspending the workflow instance in the Fault Handler

I want to implement a solution in my workflows that will do the following:
At the workflow level I want to implement a Fault Handler that will suspend the workflow on any exception.
Then, at some point, the instance will receive a Resume() command.
What I want is that when the Resume() command is received, the instance re-executes the activity that failed earlier (and caused the exception) and then continues with whatever it has left to do.
My problem:
When suspended and then resumed inside the Fault Handler, the instance just completes. The resume of course doesn't return the instance to where it was executing, since in the Fault Handler there is nothing after the Suspend activity, so obviously the execution of the workflow ends there.
I DO want to implement the Fault Handler at the workflow level and not wrap each activity in the workflow in a While+Sequence activity (as described here: Error Handling In Workflows), since with my pretty heavy workflows this would look like hell.
It should be fairly generic handling.
Do you have any ideas?
Thanks.
If you're working with State Machine Workflows, my technique for dealing with errors that require human intervention is to create an additional StateActivity node that represents an error state, something like STATE_FAULTED. Every state then has a fault handler that catches any exception, logs it, and changes state to STATE_FAULTED, passing along information like the current activity, the type of exception raised, and any other context info you might need.
In the STATE_FAULTED initialization you can listen for an external command (your Resume() command, or whatever suits your needs), and when everything is OK you can just switch to the previous state and resume execution.
I am afraid that isn't going to work. Error handling in a workflow is similar to a Try/Catch block, and the only way to retry is to wrap everything in a loop and execute the loop again if something went wrong.
Depending on the sort of error you are trying to deal with, you might be able to achieve your goal by creating custom activities that wrap their own execution logic in a Try/Catch and contain the required retry logic.