How to connect analysis_port to sequence - system-verilog

I have a sequence that needs to know when certain things have happened in the DUT in order to decide when to send the next item, and I'm trying to find the best way to get that information to the sequence. I wanted to connect an analysis_port to the sequence, since my testbench already has an analysis_port with the information I need, but my understanding is that analysis_ports need to be connected in the connect_phase, and uvm_sequence doesn't have a connect_phase. So is there any way to connect an analysis_port to a UVM sequence? Or is there any other way to get the information from an analysis_port to a sequence?
My use case is the following:
I am verifying a write-through cache and writing a sequence that sends some stores to the input of the cache. Due to ordering requirements of the DUT and the intention of this sequence, this sequence needs to wait until each store is seen on the output of the cache (written-through) before sending the next one.

You mean an analysis_export or analysis_imp...
How about doing this: extend your uvm_sequencer, add the analysis_imp to it, and implement a write function there. The write function needs to store the received information somehow, e.g. by pushing it onto a queue (which would also be a data member of your extended sequencer class).
Your sequence can then access that queue easily, because it can easily get a handle to the sequencer using p_sequencer or get_sequencer().
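A minimal sketch of that arrangement, assuming a hypothetical store_item transaction type and illustrative class/member names, might look like this:

import uvm_pkg::*;
`include "uvm_macros.svh"

// Sequencer extended with an analysis_imp; write() fills a queue of observed stores
class cache_sequencer extends uvm_sequencer #(store_item);
  `uvm_component_utils(cache_sequencer)

  uvm_analysis_imp #(store_item, cache_sequencer) wt_export;
  store_item store_seen_q[$];   // write-throughs seen on the cache output

  function new(string name, uvm_component parent);
    super.new(name, parent);
    wt_export = new("wt_export", this);
  endfunction

  // Called by the output monitor's analysis_port
  function void write(store_item t);
    store_seen_q.push_back(t);
  endfunction
endclass

// Sequence waits for each store to appear on the output before sending the next one
class store_sequence extends uvm_sequence #(store_item);
  `uvm_object_utils(store_sequence)
  `uvm_declare_p_sequencer(cache_sequencer)

  function new(string name = "store_sequence");
    super.new(name);
  endfunction

  task body();
    repeat (10) begin                               // e.g. ten stores
      `uvm_do(req)                                  // send one store
      wait (p_sequencer.store_seen_q.size() > 0);   // block until it is written through
      void'(p_sequencer.store_seen_q.pop_front());
    end
  endtask
endclass

The monitor's analysis_port still gets connected to wt_export in a component's connect_phase (e.g. in the agent or env: cache_mon.ap.connect(cache_sqr.wt_export)); the sequence itself never needs a connect_phase.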

Related

What is the purpose of a driver in UVM?

Since the generator generates test stimuli for the DUT (Design Under Test), why not feed them directly? What is the need for a driver? Please enlighten me with an example if possible.
UVM is based on transaction-level modeling (TLM). This means that you can model things like memory reads and writes at a high level of abstraction using a transaction object. For example, a memory transaction can be represented by:
Data value
Address
Direction (read or write)
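In UVM such a transaction is typically modeled as a uvm_sequence_item; a minimal sketch with just those three fields (the names are illustrative) could be:

class mem_txn extends uvm_sequence_item;
  rand bit [31:0] addr;       // Address
  rand bit [31:0] data;       // Data value
  rand bit        is_write;   // Direction: 1 = write, 0 = read

  `uvm_object_utils_begin(mem_txn)
    `uvm_field_int(addr,     UVM_ALL_ON)
    `uvm_field_int(data,     UVM_ALL_ON)
    `uvm_field_int(is_write, UVM_ALL_ON)
  `uvm_object_utils_end

  function new(string name = "mem_txn");
    super.new(name);
  endfunction
endclass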
In UVM, we typically create a uvm_agent which consists of a sequencer, a driver and a monitor. The sequencer plays the role of the transaction generator. You will also create a sequence which is attached to the sequencer to generate several transactions. Each transaction object is sent from the sequencer to the driver.
When the driver receives a transaction object, it looks at the object properties and assigns the transaction values to the interface signals. In the testbench, the interface signals are connected directly to the DUT ports.
The driver contains more details than the sequencer or transaction. It controls the timing of the signals, adding delays, wait-states, etc.
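As a rough sketch (the virtual interface and signal names are placeholders), a driver's run_phase might look like:

class mem_driver extends uvm_driver #(mem_txn);
  `uvm_component_utils(mem_driver)

  virtual mem_if vif;   // assumed to be set via uvm_config_db in build_phase

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);   // transaction object from the sequencer
      @(posedge vif.clk);                 // the driver owns the timing
      vif.addr  <= req.addr;
      vif.wdata <= req.data;
      vif.we    <= req.is_write;
      vif.valid <= 1'b1;
      @(posedge vif.clk);
      vif.valid <= 1'b0;
      seq_item_port.item_done();
    end
  endtask
endclass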
See also chapter 1 of the UVM User's Guide
The UVM was developed using the Separation of Concerns principle in software programming. In Object Oriented Programming, this usually means giving a class one basic responsibility and passing data on to another class for another responsibility.
The UVM separates the job of generating stimulus from driving the signals into the DUT to enable re-use at different levels of abstraction and at different stages of unit testing. In the UVM, you will see that even the passing of data is abstracted into a separate set of classes called TLM ports and exports.
An example might be a design that processes an encrypted image received through a serial port. Your design will probably have stages that deserialize the serial input, decrypt it, and finally store the image in memory. Depending on which stage you are testing, you would want to use the same image generator, send the image through a sequence, and have layered sets of sequences that translate it down to the particular stage you are testing.

CANOpen same object mapped to multiple TPDOs

I have a slave device with multiple TPDOs (4) for sending certain sensor data. Each TPDO has about 4 bytes of data and I want to insert a 'count' in the frame to indicate that the data is not stale. My plan is to create an object entry for this and map it into each PDO as the 5th byte. Is this allowed by the CANopen standard and, if so, is it a good idea at all?
PS: I am not sending all 8 bytes in 1 TPDO because the 4 bytes in one TPDO are correlated with each other.
Yes, it is allowed to map a (sub)object to multiple PDOs, or even multiple times to the same PDO. When using dummy mappings in RPDOs, this is actually quite common.
Whether inserting a count is a good idea depends on what you are trying to achieve. What is the problem you are trying to detect and how do you want to handle it?
If you want to check that the slave is alive and healthy, use heartbeats. If you want to check that you didn't miss a PDO, there are other ways. For SYNC-driven PDOs, you can set a flag for each PDO when you receive it and at the SYNC, check if you received them all before clearing the flags. For event-driven PDOs, you can use the event timer in the RPDO to generate an error if a PDO didn't arrive within a certain time.
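For the SYNC-driven case, a minimal sketch of that per-PDO flag check (shown here in SystemVerilog-style pseudocode; the PDO count and names are made up for illustration):

class pdo_sync_checker;
  bit received[4];              // one flag per expected PDO

  // Call when the PDO with the given index arrives
  function void on_pdo(int idx);
    received[idx] = 1;
  endfunction

  // Call on every SYNC: report anything missing, then clear the flags
  function void on_sync();
    foreach (received[i]) begin
      if (!received[i])
        $display("PDO %0d missing since last SYNC", i);
      received[i] = 0;
    end
  endfunction
endclass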
Inserting a counter will work and help you detect how many PDOs you missed. But the question is, what can you do with that information? The last PDO, even if "stale", is usually still the best guess for the value at the receiving side.

How to store sagas’ data?

From what I read, aggregates must only contain properties that are used to protect their invariants.
I also read sagas can be aggregates which makes sense to me.
Now I modeled a registration process using a saga: on a RegistrationStarted event it sends a ReserveEmail command, which will trigger an EmailReserved or EmailReservationFailed event depending on whether the email is free or not. A listener will then either send a validation link or a message saying an account already exists.
I would like to use data from the RegistrationStarted event in this listener (say the IP and user-agent). How should I do it?
Storing these data in the saga? But they’re not used to protect invariants.
Pushing them through ReserveEmail command and the resulting event? Sounds tedious.
Project the saga to the read model? What about eventual consistency?
Another way?
Rinat Abdullin wrote a good overview of sagas / process managers.
The usual answer is that the saga has copies of the events that it cares about, and uses the information in those events to compute the command messages to send.
List[Command] processManager(List[Event] events)
Pushing them through ReserveEmail command and the resulting event?
Yes, that's the usual approach; we get a list [RegistrationStarted], and we use that to calculate the result [ReserveEmail]. Later on, we'll get [RegistrationStarted, EmailReserved], and we can use that to compute the next set of commands (if any).
Sounds tedious.
The data has to travel between the two capabilities somehow. So you are either copying the data from one message to another, or you are copying a correlation identifier from one message to another and then allowing the consumer to decide how to use the correlation identifier to fetch a copy of the data.
Storing these data in the saga? But they’re not used to protect invariants.
You are typically going to be storing events in the sagas (to keep track of what has happened). That gives you a copy of the data provided in the event. You don't have an invariant to protect because you are just caching a copy of a decision made somewhere else. You won't usually have the process manager running queries to collect additional data.
What about eventual consistency?
By their nature, sagas are always going to be "eventually consistent"; the "state" of an instance of a saga is just cached copies of data controlled elsewhere. The data is probably nanoseconds old by the time the saga sees it; there's no point in pretending that the data is "now".
If I understand correctly I could model my saga as a Registration aggregate storing all the events whose correlation identifier is its own identifier?
Udi Dahan, writing about CQRS:
Here’s the strongest indication I can give you to know that you’re doing CQRS correctly: Your aggregate roots are sagas.

Updating last accessed time when separating Commands and Queries

Consider a function: IsWalletValid(walletID). It returns true if the walletID exists in the database, and updates a 'last_accessed_time' field.
A task runs periodically to remove any wallets that have not been accessed for a set period of time.
Seems like an easy solution for what we want to do, but IsWalletValid() has a side effect because it writes to the database.
Should we add an additional 'UpdateLastAccessedTime(walletID)' function? Every time we call IsWalletValid() we will also need to remember to call UpdateLastAccessedTime(walletID).
Do verifying that a wallet is valid and updating its last_accessed_time field need to be transactionally consistent (ACID)? You could use eventual consistency here:
The method IsWalletValid publishes a WalletAccessed event, then an event handler updates last_accessed_time asynchronously.
If last_accessed_time is not used by domain logic to make decisions on any write handling, it could just be a facet of the read-only projection. This seems like the same concern as other, more verbose read-audit concerns. Just because data is being written and maintained doesn't mean it necessarily needs to be part of the write model of the system. If you did want to implement this as part of the domain, and perhaps store it within the same event store, it could be considered a separate auditing context outside the boundary of the original aggregate being audited.

How do I model a queue on top of a key-value store efficiently?

Suppose I have a key-value database and I need to build a queue on top of it. How could I achieve this without getting bad performance?
One idea might be to store the queue inside an array, and simply store the array under a fixed key. This is quite a simple implementation, but it is very slow, as for every read or write access the complete array must be loaded / saved.
I could also implement a linked list with random keys, where one fixed key acts as the starting point to element 1. Depending on whether I prefer fast read or fast write access, I could let the fixed key point to the first or the last entry in the queue (and then traverse it forward / backward).
Or, to take that further, I could have two fixed pointers: one for the first item, one for the last.
Any other suggestions on how to do this efficiently?
Fundamentally, a key-value structure is very similar to raw memory storage, where the physical address in computer memory acts as the key. So any type of data structure can certainly be modeled on top of key-value storage, including a linked list.
A linked list is a list of nodes, each including index information about the previous or following node. The node itself can then also be viewed as a sub key-value structure. With an additional prefix on the key, the information in each node can be stored separately in a flat table of key-value pairs.
Going further, a special suffix on the key can also make it possible to get rid of the redundant pointer information. Such a list might look something like this:
pilot-last-index: 5
pilot-0: Rei Ayanami
pilot-1: Shinji Ikari
pilot-2: Soryu Asuka Langley
pilot-3: Touji Suzuhara
pilot-5: Makinami Mari
The corresponding algorithm is easy to imagine. If you can have a daemon thread to manipulate these keys, pilot-5 could be renamed to pilot-4 in the above example. Even if an additional thread is not allowed in some situations, the behaviour of the queue itself is not affected; there is just some overhead for the gap in the sequence.
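A small sketch of appending to such a suffix-indexed list, using a SystemVerilog associative array to stand in for the key-value store (the pilot-* key names follow the example above; everything else is illustrative):

string kv[string];   // stand-in for the key-value store

// Append a value under the next pilot-<n> key and bump pilot-last-index
function automatic void append(string value);
  int last = kv.exists("pilot-last-index") ? kv["pilot-last-index"].atoi() : -1;
  last++;
  kv[$sformatf("pilot-%0d", last)] = value;
  kv["pilot-last-index"] = $sformatf("%0d", last);
endfunction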
Which of the two approaches above to apply is a trade-off between the cost of storage space and the overhead of CPU time.
Thread safety is indeed a problem, but an old one. Just like the classes implementing the ConcurrentMap interface in the JDK, atomic operations on key-value data are also provided. Similar features exist in some key-value middleware, such as memcached, which let you update a key or value separately and thread-safely. However, these implementations are an algorithm problem rather than a property of the key-value structure itself.
I think it depends on the kind of queue you want to implement, and no solution will be perfect, because a key-value store is not the right data structure for this kind of task. There will always be some kind of hack involved.
For a simple first-in-first-out queue you could use a few key-value entries like the following:
{
oldestIndex:5,
newestIndex:10
}
In this example there would be 6 items in the queue (5, 6, 7, 8, 9, 10). Items 0 to 4 are already done, and there is no item 11 yet. The producer worker would increment newestIndex and save its item under the key 11. The consumer takes the item under the key 5 and increments oldestIndex.
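A hedged sketch of that producer/consumer scheme, again using a SystemVerilog associative array as the key-value store (the item-* key names are made up; in a real store the index update and the item write would need an atomic operation or a lock):

string kv[string];

function automatic void init_queue();
  kv["oldestIndex"] = "0";
  kv["newestIndex"] = "-1";                          // empty when newest < oldest
endfunction

// Producer: write the item first, then publish the new newestIndex
function automatic void produce(string item);
  int idx = kv["newestIndex"].atoi() + 1;
  kv[$sformatf("item-%0d", idx)] = item;
  kv["newestIndex"] = $sformatf("%0d", idx);
endfunction

// Consumer: take the oldest item and advance oldestIndex
function automatic bit consume(output string item);
  int oldest = kv["oldestIndex"].atoi();
  if (oldest > kv["newestIndex"].atoi()) return 0;   // queue is empty
  item = kv[$sformatf("item-%0d", oldest)];
  kv.delete($sformatf("item-%0d", oldest));
  kv["oldestIndex"] = $sformatf("%0d", oldest + 1);
  return 1;
endfunction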
Note that this approach can lead to problems if you have multiple consumers/producers, and if the queue is never empty you can't reset the indices.
But the multithreading problem also exists for linked lists etc.