multiple interrupt handlers within a single driver - linux-device-driver

Assuming we have multiple interrupt lines (multiple FPGAs) and each line is associated with a certain address. Is having multiple interrupt handlers in the same driver a thing? Based on the ioread32() result for that address, if it points to FPGA1 then I can associate the request_irq() with InterruptHandler1, FPGA2 with InterruptHandler2, and so on... Is that the correct approach if two interrupts from different FPGAs occur at the same time? Would the driver be able to process both?
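Roughly what I have in mind, as a sketch (the IRQ numbers irq_fpga1/irq_fpga2, the dev_id pointers and the handler bodies are made up; I would work out which line belongs to which FPGA at probe time from the ioread32() result):

    /* Sketch only: hypothetical IRQ numbers and dev_id pointers, one handler per line. */
    static irqreturn_t InterruptHandler1(int irq, void *dev_id)
    {
        /* read/acknowledge FPGA1's status registers, do the minimum work here */
        return IRQ_HANDLED;
    }

    static irqreturn_t InterruptHandler2(int irq, void *dev_id)
    {
        /* same idea for FPGA2, completely independent of InterruptHandler1 */
        return IRQ_HANDLED;
    }

    /* In probe(), after working out which line belongs to which FPGA: */
    int ret = request_irq(irq_fpga1, InterruptHandler1, 0, "fpga1", dev_fpga1);
    if (!ret)
        ret = request_irq(irq_fpga2, InterruptHandler2, 0, "fpga2", dev_fpga2);

i.e., each line gets its own handler and its own dev_id, registered with a separate request_irq() call.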

Related

Omnetpp application sends multiple streams

Let's say I have a car with different sensors: several cameras, LIDAR and so on. The data from these sensors is going to be sent to some host over a 5G network (omnetpp + inet + simu5g). For video it is roughly 5000 packets of 1400 bytes each, for LIDAR 7500 packets of 1240 bytes, and so on. Each flow is encoded in UDP packets.
So in the OMNeT++ module, in the handleMessage method, I have two sendTo calls, each scheduled "as soon as possible", i.e., with no delay - that corresponds to the idea of multiple parallel streams. How does OMNeT++ handle situations when it needs to send two different packets at the same time from the same module to the same module (some client, which receives the sensor data streams)? Does it create some internal buffer on the sender or receiver side, thereby really allowing only one packet to be sent per handleMessage call, or is that wrong? I want to optimize data transmission and play with packet sizes and maybe with sending intervals, so I want to know how OMNeT++ handles multiple simultaneous streams, because if it actually buffers, then maybe it makes sense to form a single packet from multiple streams, where each such packet consists of a certain amount of data from each stream.
There is some confusion here that needs to be clarified first:
OMNeT++ is a discrete event simulator framework. An OMNeT++ model contains modules that communicate with each other using OMNeT++ API calls like sendTo() and handleMessage(). Any call of the sendTo() method just queues the provided message into the future event queue (an internal, time-ordered queue). So if you send more than one packet in a single handleMessage() method, they will be queued in that order. The packets will be delivered one by one to the requested destination modules when the requested simulation time is reached. So you can send as many packets as you wish, and those packets will be delivered one by one to the destination's handleMessage() method.
But beware! Even though the different packets are delivered one by one sequentially in the program's logic, they can still be delivered simultaneously in terms of simulation time. There are two time concepts here: real time, which describes the execution order of the code, and simulation time, which describes the time that passes from the point of view of the simulated system. That's why, although OMNeT++ is a single-threaded application that executes events sequentially, it can still simulate an arbitrary number of systems running in parallel.
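To make this concrete, here is a minimal sketch of a module that emits two packets from a single handleMessage() call (the module, packet names, interval and the "out" gate are made up and would need a matching NED declaration; send() and scheduleAt() are the standard cSimpleModule calls):

    #include <omnetpp.h>
    using namespace omnetpp;

    // Hypothetical traffic source: each trigger emits two packets "at once".
    class SensorSource : public cSimpleModule
    {
      protected:
        virtual void initialize() override {
            scheduleAt(simTime(), new cMessage("trigger"));        // first self-event
        }
        virtual void handleMessage(cMessage *msg) override {
            // Both send() calls run in the same handleMessage() call, i.e. at the
            // same simulation time. Each one only inserts an event into the future
            // event queue; the packets are then delivered one by one by the kernel.
            send(new cPacket("cameraChunk", 0, 1400 * 8), "out");  // 1400 B as bits
            send(new cPacket("lidarChunk",  0, 1240 * 8), "out");  // 1240 B as bits
            scheduleAt(simTime() + 0.001, msg);                    // made-up interval
        }
    };

    Define_Module(SensorSource);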
BUT:
You are not modeling directly with OMNeT++ modules, but rather using the INET Framework, which is a model framework created specifically to simulate internet protocols and networks. INET's core entity is a node, which is something that has network interface(s) (and queues belonging to them). Transmissions between nodes are properly modeled, and only a single packet can travel on an Ethernet link at a time. Other packets must queue in the network interface queue and wait for an opportunity to be delivered from there.
This is actually the core of the problem for Time-Sensitive Networking (TSN): given a lot of pre-defined data streams in a network, how do the various packets interfere with and affect each other, how do they change the delay and jitter statistics of the various streams at the destination, and how can you configure the source and network gate scheduling to achieve some desired upper bounds on those statistics?
The INET master branch (to be released as INET 4.4) contains a lot of TSN code, so I highly recommend trying it if you want to model in-vehicle networks.
If you are not interested in in-vehicle communication, but rather want to stream some data over 5G, then TSN is not what you need, but you should NOT start to multiplex/demultiplex data streams at the application level. The communication layers below your UDP application will fragment/defragment and queue the packets exactly as it is done in the real world. You will not gain anything by doing mux/demux at the application layer.

Need for multi-threading in SystemVerilog using fork-join

In most textbooks advocating layered testbench designs, it is recommended that the different layers/blocks run in parallel. I'm currently unable to figure out why that is. Why can't we follow the following sequence?
repeat for 1000 tests:
    generate a transaction
    drive the transaction on the DUT
    monitor the transaction on the DUT
    compare output with a reference
Instead, what is recommended is that all four blocks (generator, driver, monitor and scoreboard/checker) run in parallel. My confusion is why we avoid the above-mentioned sequential behavior, in which we go through one test case at a time, and instead prefer different blocks running in parallel.
Some texts say that it is because that is how things are done in hardware, i.e. everything runs in parallel. However, the layered testbench is not needed to model any synthesizable hardware. So why do we have to restrict our verification environment/testbench to follow this hardware-like behavior?
A sample block diagram that I'm referring to is given below:
Suppose that you have a FIFO which you want to test. Your driver pushes data into it, and the monitor checks the other end. The data gets pushed when it is available and until the FIFO is full; the consumer on the other end reads data when it can. So the pipe is sometimes full, sometimes empty.
When the FIFO is full, the driver must stop. The monitor always works, but its values do not change at the same rate as the stimuli, and it is delayed due to the FIFO depth.
In your example, when the FIFO is full, the stopped driver will block the whole loop, so the monitor will not work either. Of course, you can come up with some conditional statements which bypass the stopped driver. But then you need to run the monitor and the scoreboard on every iteration, even if the data is not changing.
With more complicated designs involving multiple FIFOs, pipelines, delays, clock frequencies, etc., your loop will become so complicated that it would be difficult, if not impossible, to manage.
The problem is that in simple sequential programming it is not possible to express block/wait conditions for a single statement without blocking the whole loop. It is much easier to do with parallel threads.
The general approach is to run the driver and the monitor in separate simulation threads. The monitor in this case waits for data to appear and does not block the driver. The driver pushes data when it is available and can be blocked by a full FIFO or by having nothing to drive. It does not block the monitor.
With a single monitor you can probably pack the scoreboard into the same thread as the monitor, but with multiple monitors that becomes problematic, in particular when all monitors run in separate threads. So the scoreboard should run as a separate thread as well.
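To illustrate the control flow only (this is not SystemVerilog, just the same producer/consumer idea written with plain C++ threads and a hypothetical bounded fifo; transaction count and depth are made up):

    #include <condition_variable>
    #include <cstddef>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>

    // Conceptual sketch: a bounded "fifo" between a driver thread and a monitor
    // thread. A full fifo blocks only the driver; the monitor keeps draining and
    // checking on its own.
    std::queue<int> fifo;
    std::mutex m;
    std::condition_variable not_full, not_empty;
    const std::size_t DEPTH = 4;

    void driver() {
        for (int tx = 0; tx < 20; ++tx) {
            std::unique_lock<std::mutex> lk(m);
            not_full.wait(lk, [] { return fifo.size() < DEPTH; });  // blocks the driver only
            fifo.push(tx);                                          // "drive" the transaction
            not_empty.notify_one();
        }
    }

    void monitor() {
        for (int seen = 0; seen < 20; ++seen) {
            std::unique_lock<std::mutex> lk(m);
            not_empty.wait(lk, [] { return !fifo.empty(); });       // waits for data only
            int tx = fifo.front();
            fifo.pop();
            lk.unlock();
            not_full.notify_one();
            std::cout << "checked " << tx << "\n";                  // scoreboard-style check
        }
    }

    int main() {
        std::thread d(driver), mon(monitor);   // roughly what fork ... join does
        d.join();
        mon.join();
    }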
You are mixing two different concepts. The layered approach is a software concept that helps manage different abstraction levels, from software transactions (a frame of data) down to the individual pin wiggles. These layers are very similar to the OSI network model. Layering also helps with maintenance and reusability by defining clear interfaces that enable you to build up a larger system. It's hard to see the benefits of this on a testbench for a small combinational block.
Parallelism comes into play for other reasons. There are relatively few complete designs out there that can be tested by applying a single stream of inputs and then comparing the output to a reference model. You might be able to test one small block of a design this way, but not a complete chip, as it typically has many interfaces that need to be driven in parallel.
But let's take the case of two simple blocks that you tested individually with the approach above. Now you want to connect them together so that the output of the first DUT becomes the driver of the second DUT:
Driver1 -> DUT1 -> DUT2 -> Monitor2
This works best if I originally write the drivers and monitors as separate objects running in parallel.

How Axon framework's sequencing policy works in terms of statefulness

In Axon's reference guide it is written that
Besides these provided policies, you can define your own. All policies must implement the SequencingPolicy interface. This interface defines a single method, getSequenceIdentifierFor, that returns the sequence identifier for a given event. Events for which an equal sequence identifier is returned must be processed sequentially. Events that produce a different sequence identifier may be processed concurrently.
Even more, in this thread's last message it says that
with the sequencing policy, you indicate which events need to be processed sequentially. It doesn't matter whether the threads are in the same JVM, or in different ones. If the sequencing policy returns the same value for 2 messages, they will be guaranteed to be processed sequentially, even if you have tracking processor threads across multiple JVMs.
So does this mean that event processors are actually stateless? If yes, then how do they manage to synchronise? Is the token store used for this purpose?
I think this depends on what you count as state, but I assume that from the point of view you're looking at it from, yes, the EventProcessor implementations in Axon are indeed stateless.
The SubscribingEventProcessor receives its events from a SubscribableMessageSource (the EventBus implements this interface) when they occur.
The TrackingEventProcessor retrieves its events from a StreamableMessageSource (the EventStore implements this interface) at its own leisure.
The latter therefore needs to keep track of where it is with respect to the events on the event stream. This information is stored in a TrackingToken, which is saved by the TokenStore.
A given TrackingEventProcessor thread can only handle events if it has laid a claim on the TrackingToken for the processing group it is part of. Hence, this ensures that the same event isn't handled by two distinct threads that would accidentally update the same query model.
The TrackingToken also allows multithreading this process, which is done by segmenting the token. The number of segments (adjustable through initialSegmentCount) determines the number of pieces the TrackingToken for a given processing group will be partitioned into. From the point of view of the TokenStore, this means you'll have as many stored TrackingToken instances as the number of segments you've set.
The SequencingPolicy's job is to decide which events in a stream belong to which segment. That way you could, for example, use the SequentialPerAggregate sequencing policy to ensure that all events with a given aggregate identifier are handled by one segment.
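To illustrate the mechanism only (this is a conceptual sketch, not Axon's actual code; the hashing, names and segment count are made up to show why an equal sequence identifier always lands in the same segment and is therefore processed sequentially):

    #include <functional>
    #include <iostream>
    #include <string>

    // Conceptual sketch: events whose sequencing policy yields the same identifier
    // always map to the same segment, so one segment (one claiming thread) handles
    // them in order, while events with other identifiers may run concurrently.
    struct Event {
        std::string aggregateId;
        std::string payload;
    };

    // "Sequential per aggregate"-style policy: the aggregate identifier is the
    // sequence identifier.
    std::string sequenceIdentifierFor(const Event& e) { return e.aggregateId; }

    int segmentFor(const Event& e, int segmentCount) {
        // Same identifier -> same hash -> same segment, every time.
        return static_cast<int>(std::hash<std::string>{}(sequenceIdentifierFor(e)) % segmentCount);
    }

    int main() {
        Event a{"order-42", "OrderCreated"};
        Event b{"order-42", "OrderShipped"};
        Event c{"order-7",  "OrderCreated"};
        // a and b end up in the same segment and are handled sequentially;
        // c may end up in a different segment and be handled concurrently.
        std::cout << segmentFor(a, 4) << " " << segmentFor(b, 4) << " " << segmentFor(c, 4) << "\n";
    }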

how to set timer for physical process in Castalia?

The usual practice in Castalia is that the application module requests a sensor reading using the requestSensorReading() function, which is handled by the sensor manager. The sensor manager forwards the request to the physical process, and the physical process replies back with its value.
What I want to do is have the physical process broadcast its value at set intervals of time. The sensor device will have a sensitivity > 0 and a few nodes will receive the value. How can I accomplish this? Is it possible to use the timerFiredCallback() function and BROADCAST_NETWORK_ADDRESS inside the physical process?
You seem to be confused about the basic models of Castalia. The physical process is not a sensor node that sends network broadcast messages. It is a module that models the physical process which the sensors in our sensor nodes are sampling. Moreover, a physical process does not have a single value. Values change depending on space and time, and on the specific model you have defined (the manual has plenty of info on how to define physical processes). You could define a physical process that returns one value for every point in space and every point in time, but I am not sure why you would want to use such a process in a simulation.
A physical process does not "broadcast its value". Sensor nodes sample the physical process and, based on space, time, and the specific model of the process, they get a value back. Different sensor nodes might get different values back. To achieve what you want, you simply make all sensor nodes periodically sample the physical process. There are some examples of Applications that do that.
So to recap: you define how your physical process needs to behave, and then you make the sensor nodes sample it (from the Application module using the method requestSensorReading(), as you already know).
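A rough sketch of that pattern from the Application module's side (the class name, timer index and interval are made up, other required virtual methods are omitted, and the method names follow Castalia's VirtualApplication interface as I remember it - check the example applications shipped with Castalia for the exact signatures):

    // Sketch of a periodic-sampling Application for Castalia (not a physical process).
    #include "VirtualApplication.h"

    enum PeriodicSampleTimers { SAMPLE_SENSOR = 1 };

    class PeriodicSample : public VirtualApplication {
     protected:
        double sampleInterval;                              // seconds, e.g. read from a NED parameter

        void startup() {
            sampleInterval = 1.0;                           // hypothetical value
            setTimer(SAMPLE_SENSOR, sampleInterval);        // arm the first sample
        }

        void timerFiredCallback(int index) {
            if (index == SAMPLE_SENSOR) {
                requestSensorReading();                     // goes to the SensorManager -> physical process
                setTimer(SAMPLE_SENSOR, sampleInterval);    // re-arm for periodic sampling
            }
        }

        void handleSensorReading(SensorReadingMessage *reading) {
            trace() << "sampled value " << reading->getSensedValue();
            // ...optionally pass the value to the network layer to report it...
        }
    };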

Difference between interrupt and event

What is the difference between an interrupt and an event?
These two concepts both offer ways for the "system/program" to deal with various "conditions" which take place during the normal unfolding of some program, and which may require the "system/program" to do something else before returning (or not...) to the original task. However, aside from this functional similarity, they are very distinct concepts used in distinct contexts, at distinct levels.
Interrupts provide a low-level mechanism for interrupting the normal unfolding of whatever piece of program the CPU is working on at a given time, and for having the CPU start processing instructions at another address. Interrupts are useful for handling various situations which require the CPU's immediate attention (for example dealing with keystrokes, or with the arrival of new data on a serial communication channel).
Many interrupts are produced by hardware (by some electronic device changing the polarity on one of the pins of the CPU), but there are also software interrupts, which are caused by the program itself invoking a particular instruction (or by the CPU detecting that something is amiss with itself or with the running program).
A very famous interrupt is INT 0x21, which programs invoked to call services from MS-DOS.
Interrupts are typically dispatched by way of vector tables, whereby the CPU has a particular location in memory containing an array of addresses [where particular interrupt handlers reside]. By modifying the content of the interrupt table [if it is so allowed...], a program can redefine which particular handler will be called for a given interrupt number.
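The vector-table idea in miniature (a toy model with made-up names, not how any specific CPU implements it; real tables live at an address defined by the architecture):

    #include <cstdio>

    // Toy model of an interrupt vector table: an array of handler addresses
    // indexed by interrupt number. Real CPUs fetch the address themselves;
    // here we just look it up and call it.
    typedef void (*interrupt_handler_t)(void);

    void default_handler(void)  { std::puts("unhandled interrupt"); }
    void my_int21_handler(void) { std::puts("custom INT 0x21 handler"); }

    interrupt_handler_t vector_table[256];   // all entries start out null

    void raise_interrupt(int number) {
        interrupt_handler_t h = vector_table[number] ? vector_table[number] : default_handler;
        h();   // "jump" to the handler registered for this interrupt number
    }

    int main() {
        vector_table[0x21] = my_int21_handler;   // a program redefining a vector entry
        raise_interrupt(0x21);
    }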
Events, on the other hand, are system/language-level "messages" which can be used to signify various hardware or software situations (I'd use the word event), such as mouse clicks or keyboard entries, but also application-level situations such as "New record inserted in database", or highly digested requests and messages used in modular programs for communication/requests between various parts of the program.
Unlike interrupts, whose [relatively simple] behavior is fully defined by the CPU, there exist various event systems, at the level of the operating system as well as in various frameworks (e.g. MS Windows, JavaScript, .NET, GUI frameworks like Qt, etc.). All event systems, while different in their implementations, typically share common properties such as the ones below (a small illustrative sketch follows the list):
- the concept of a handler, which is a particular function/method of the program designated to handle particular types of events from particular event sources;
- the concept of an event, which is a [typically small] structure containing information about the event: its type, its source, and custom parameters (whose semantics depend on the event type);
- a queue where events are inserted by sources and polled by consumers/handlers (or, more precisely, by dispatchers, depending on the system).
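A minimal sketch putting those three pieces together (everything here, names included, is illustrative and not any particular framework's API):

    #include <functional>
    #include <iostream>
    #include <map>
    #include <queue>
    #include <string>
    #include <vector>

    // Illustrative only: a tiny event system with the three pieces listed above.
    struct Event {
        std::string type;                            // e.g. "mouse.click"
        std::string source;                          // who produced it
        std::map<std::string, int> params;           // custom parameters
    };

    using Handler = std::function<void(const Event&)>;

    std::queue<Event> eventQueue;                              // the queue
    std::map<std::string, std::vector<Handler>> handlers;      // handlers per event type

    void post(const Event& e) { eventQueue.push(e); }          // sources insert events

    void dispatch() {                                          // the dispatcher/consumer
        while (!eventQueue.empty()) {
            Event e = eventQueue.front();
            eventQueue.pop();
            for (const auto& h : handlers[e.type]) h(e);       // deliver to registered handlers
        }
    }

    int main() {
        handlers["mouse.click"].push_back([](const Event& e) {
            std::cout << "click at x=" << e.params.at("x") << " from " << e.source << "\n";
        });
        post({"mouse.click", "gui", {{"x", 10}, {"y", 20}}});
        dispatch();
    }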
Interrupts are implemented inside the hardware (CPU) to interrupt the usually linear flow of a program. This is important for external events like keyboard input but also for interrupting programs in multi-tasking operating systems.
Events are a software engineering construct and are probably best known from GUI toolkits. There, the toolkit/OS wraps happenings like keystrokes or mouse input into "events". Those are then dispatched to programs that have registered themselves to receive such events. It's maybe a bit like a mailing system.
To compare both, from a userspace program's point of view:
- Interrupts would force your program to halt in order to let some lower-level code (like OS code) execute
- Events are usually sent to you from lower-level code and trigger execution of your code