What is the purpose of a driver in UVM? - system-verilog

Since the generator generates test stimuli for the DUT (Design Under Test), why not feed them directly? What is the need for a driver? Please enlighten me with an example if possible.

UVM is based on transaction-level modeling (TLM). This means that you can model things like memory reads and writes at a high level of abstraction using a transaction object. For example, a memory transaction can be represented by:
Data value
Address
Direction (read or write)
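As a sketch, such a transaction object might look like this in SystemVerilog (the class and field names are illustrative, not mandated by UVM):

import uvm_pkg::*;
`include "uvm_macros.svh"

typedef enum {READ, WRITE} dir_e;

class mem_txn extends uvm_sequence_item;
  rand bit [31:0] addr;  // Address
  rand bit [31:0] data;  // Data value
  rand dir_e      dir;   // Direction (read or write)

  `uvm_object_utils_begin(mem_txn)
    `uvm_field_int(addr, UVM_ALL_ON)
    `uvm_field_int(data, UVM_ALL_ON)
    `uvm_field_enum(dir_e, dir, UVM_ALL_ON)
  `uvm_object_utils_end

  function new(string name = "mem_txn");
    super.new(name);
  endfunction
endclass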
In UVM, we typically create a uvm_agent which consists of a sequencer, a driver and a monitor. The sequencer plays the role of the transaction generator. You will also create a sequence which is attached to the sequencer to generate several transactions. Each transaction object is sent from the sequencer to the driver.
When the driver receives a transaction object, it looks at the object properties and assigns the transaction values to the interface signals. In the testbench, the interface signals are connected directly to the DUT ports.
The driver deals with lower-level details than the sequencer or the transaction: it controls the timing of the signals, adding delays, wait-states, etc.
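As a minimal sketch (reusing the hypothetical mem_txn above, and assuming a virtual interface mem_if with clk, addr, wdata and we signals), the driver's run phase might look like:

class mem_driver extends uvm_driver #(mem_txn);
  `uvm_component_utils(mem_driver)

  virtual mem_if vif;  // set via uvm_config_db in a real testbench

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req);  // block until the sequencer sends one
      @(posedge vif.clk);                // the driver owns all signal timing
      vif.addr  <= req.addr;
      vif.wdata <= req.data;
      vif.we    <= (req.dir == WRITE);
      seq_item_port.item_done();         // hand control back to the sequencer
    end
  endtask
endclass

Note that the sequence and sequencer never touch vif; only the driver knows how a transaction maps onto pin-level activity.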
See also chapter 1 of the UVM User's Guide

The UVM was developed using the Separation of Concerns principle in software programming. In Object Oriented Programming, this usually means giving a class one basic responsibility and passing data on to another class for another responsibility.
The UVM separates the job of generating stimulus from the job of driving the signals into the DUT, to enable re-use at different levels of abstraction and at different stages of unit testing. In the UVM, you will see that even the passing of data is abstracted into a separate set of classes called TLM ports and exports.
An example might be a design that processes an encrypted image received through a serial port. Your design will probably have stages that deserialize the serial input, decrypt it, and finally store an image in memory. Depending on which stage you are testing, you would want to use the same image generator, send the image through a sequence, and have layered sets of sequences that translate the stimulus for the particular stage you are testing.

Correct approach to model network interaction in Enterprise Architect

I have a class Actor whose instances send and receive network messages. (E.g. each instance of that class is part of a different process running on a different physical machine.) The network messages are serialized instances of the classes MessageA and MessageB, whose attributes are sent over the wire. An incoming message is handled by a callback method of my Actor class. An outgoing message is triggered by calling a method of my Actor class.
Hence, I started to model this situation in a class diagram like this:
The network messages are "signals" in EA terms, i.e. classes with a special stereotype (for succinctness the attributes are left out).
My Actor class is a regular class in EA with four corresponding methods.
Now, I want to model a typical interaction and started to draw the following sequence diagram:
The messages are not method invocations; they are asynchronous and of kind "signal", which allows me to assign them the correct message type.
However, I wonder how I can model, in the sequence diagram,
the fact that a message with payload MessageA is handled by onMessageAReceived
the fact that the method sendMessageA emits a message with payload MessageA
(Note: In terms of my implementation it is correct that sendMessageA returns void, because sending a network message is asynchronous and offloaded to the underlying OS; the method returns to its caller after having sent the message.)
Maybe my whole approach is completely wrong and I am trying to model something which cannot be modeled like that. In that case, some pointers to the correct approach are highly welcome.
Of course there's more than one way to model this (and it does not depend on the tool, EA). So you should ask which audience you are addressing, and what their domain basically is.
Technical
An SD is well suited to showing a physical transport. In that case you concentrate on how messages are sent, and you show the physical operations as messages. E.g. using sockets, it would be some (a-)synchronous send(message) which ensures that the content message is transported from A to B. This can be at any level of technical detail, from a rough overview down to single CRCs being sent (or how the operation is internally built to ensure packets are not lost).
Logical
In order to show the more logical aspect, it's a good idea to have components (deployed on multiple pieces of hardware) with ports (realizing some interface), along which you have an information flow (a connector you will find in EA) that can transport something (namely, your message classes).
Overview
You might want to describe both aspects in your model, but you will likely focus on one or the other depending on your overall domain.
There is no single way to model something. Models are always abstractions, which is why we create them: they should reflect reality, but in a more lightweight form.

hexagonal architecture and transactions concept

I'm trying to get used to hexagonal architecture and can't work out how to implement common practical problems that I have already solved with other approaches. I think my core problem is understanding the level of responsibility that gets extracted into adapters and ports.
Reading articles on the web, everything is fine with primitive examples like:
we have a RepositoryInterface which can be implemented in
mysql/txt/s3/nosql storage
or
we have a NotificationSendingInterface and have email/sms/web push implementations
but those are very polished examples that boil down to a simple separation of interface from implementation details.
In practice, however, when coding a service in the domain model we usually rely more deeply on the guarantees of the interface+implementation pair.
As an illustrative example, I decided to ask about the storage+transaction pair.
How should the concept of a transaction over storage be implemented in hexagonal architecture?
Assume we have a simple CRUD service interface at the domain level:
StorageRepoInterface
save(...)
update(...)
delete(...)
get(...)
and we want some kind of transaction guarantee while working with those methods, e.g. delete+save in one transaction.
How should it be designed and implemented according to the hexagonal concept?
Should it be implemented with some external coordination interface like TransactionalOperation? If yes, then in general TransactionalOperation must know how to implement the transaction guarantee for all implementations of StorageRepoInterface (maybe via an additional transaction-oriented operation interface).
If not, then it seems there should be explicit transaction guarantees from StorageRepoInterface at the domain level (inside the hexagon), via additional methods?
Either way, it does not look as "isolated and interface-based" as advertised.
Can someone point out how to adjust my mindset for such situations, or where to read up on this?
Thanks in advance.
In Hex Arch, driver ports are the API of the application, the use case boundary. Use cases are transactional, so you have to control transactionality at the driver port methods: you enclose every method in a transaction.
If you use Spring, you could use declarative transactions (the @Transactional annotation).
Another way is to explicitly open a db transaction before the execution of the method, and to close it (commit / rollback) after the method.
A useful pattern for applying transactionality is the command bus: you wrap it with a decorator which encloses each command in a transaction.
Transactions are infrastructure, so you should have a driven port and an adapter implementing that port.
The implementation must use the same db context (entity manager) used by the persistence adapters (repositories).
Vaughn Vernon talks about this topic in the "Managing transactions" section (pages 432-437) of his book "Implementing DDD".
Instead of using the command bus pattern, you could simply inject a TransactionPort into your command handler (defined at the domain level).
The TransactionPort would have two methods (start and commit).
The TransactionAdapter would be your custom implementation (defined at the infrastructure level).
Then you could do something like:
this.transactionPort.start();
// do your stuff
this.transactionPort.commit();

What is the purpose of register model in UVM?

In UVM, the testbench does not have any visibility into the internal registers of the DUT. Then why is there mirroring and creation of register models in the UVM testbench architecture? What purpose do they serve?
The testbench would never know whether a status bit or similar is ever updated inside the DUT, since it only has access to the DUT's input and output ports.
The DUT's internal registers may not be directly accessible via ports, but some registers are accessible via an interface protocol. The register model is primarily intended for those registers. You can also access any register in the design via the backdoor (although that is not always desirable, as it requires more work to set up and maintain).
The mirror stores what the testbench thinks the register values of the DUT are. When you call .mirror(), the register model compares the register value (actual) versus the mirror (expected).
Status bits are often complicated to predict. To simplify things you can turn off the compare of the field (or register) with .set_compare(UVM_NO_CHECK). If you disable the check at the field level, other fields in the same register will still be compared.
If you are ambitious and want to do more complex prediction/mirror-compare on status bits, you do have options, such as register callbacks, or extending the uvm_reg and uvm_reg_field classes to override the .predict and .mirror methods.
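As a minimal sketch of the mirror/compare flow (the register block, register, and field names here are made up for illustration):

uvm_status_e status;

// Skip comparing just the hard-to-predict field; the other fields in
// status_reg are still checked against the mirror.
regmodel.status_reg.busy.set_compare(UVM_NO_CHECK);

// Read the DUT register through the bus and compare the remaining
// fields (actual) against the mirrored values (expected).
regmodel.status_reg.mirror(status, UVM_CHECK);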
UVM RAL provides several benefits
It provides a high-level abstraction for reading and writing registers in your design. This is especially useful when the RTL for your registers has been compiled from another description: all of the addresses and bit fields get replaced with human-readable names.
Your tests can be written independently of the physical bus interface; just call the read/write methods.
The mirrored registers make it easy to know the state/configuration of your DUT without having to add your own set of mirrored variables, or perform extra read operations.
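For instance, a configuration access might look like this, with no knowledge of the underlying bus protocol (the register names are hypothetical, and the calls are assumed to run inside a uvm_sequence):

uvm_status_e   status;
uvm_reg_data_t value;

// Front-door accesses: the register model translates these into whatever
// bus transactions the attached adapter and sequencer implement.
regmodel.ctrl_reg.write(status, 'h3, .parent(this));
regmodel.ctrl_reg.read (status, value, .parent(this));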
A register model is an entity that represents, as a hierarchical data structure of class objects, every register and each of its fields. A register model (or register abstraction layer) is a set of classes that model the memory-mapped behavior of registers and memories in the DUT, in order to facilitate stimulus generation. We can perform read and write operations on the design using a RAL model. It mirrors the design registers by building a model within the verification environment: by applying stimulus to the register model, the actual design registers exhibit the changes applied by the stimulus.
The advantage of the RAL model comes from the high level of abstraction it provides. It offers backdoor access to registers and memories, and it integrates easily into a UVM verification environment. Whenever a read or write operation is performed, the RAL model is automatically updated. It also supports designs with multiple physical interfaces.

Bus Functional Models (System Verilog)

I'm looking to design a bus functional model for the SPI and UART buses. First, I want to understand whether my perception of a bus functional model is correct.
It is supposed to simulate bus transactions, without worrying about the underlying implementation specifics.
For instance,
If I were to build a BFM for an SPI bus, the design should be able to simulate transactions on the SPI bus while acting as a master, based on some protocol, for example reading instructions from a file and then displaying them in a simulator accordingly.
For example, a generic data transfer instruction like send_write(0x0c, 0x0f), which sends the data byte 0c to the slave at address 0f, should drive the Chip Select line low and send the data bits on the appropriate clock edges based on the SPI mode. Is my understanding of a BFM correct in this case?
Now what I don't understand is: how is this helpful?
Where between the DUT and the testbench does a BFM sit, and how does it help a system designer?
I also want to know if there are any reference BFMs that have been built and tested and are available to study.
I'd appreciate it if someone could help me with an example, preferably in SystemVerilog.
I had to research this a lot, so I thought I would answer. Here's an idea of what a BFM is.
Think of a Bus Functional Model (BFM) as something that simulates the transactions of a bus, such as READ and WRITE, relieving the testbench of the overhead of handling the signal timing itself. There are many more interpretations of a BFM, but BFMs typically reduce the job of the testbench by making it more data-focused.
Okay, that was a high-level answer; let's dig a little deeper.
Think of the BFM as a block that sits within the testbench as a whole. When the testbench needs to perform a task, for instance writing to a particular address, it asks the BFM to write to that address. The BFM, which is a black box to the testbench, performs the transaction while taking care of the timing. It can be driven by a file loaded by the testbench, or it can be a set of tasks that the testbench calls to perform the transactions.
The Design Under Test's (DUT's) response to the BFM's transactions is what is of interest to the tester of the design. One may argue that the BFM has to change based on the DUT, and handling that gracefully is what distinguishes a better BFM.
If the BFM can be loaded with a configuration vector that initializes it to behave according to the DUT's specifications, then it becomes portable for helping test other designs.
Further, the BFM may be defined as an abstract class with pure virtual tasks (in SV), which can then have concrete implementations based on the DUT:
virtual class apb_bfm;
  // Low-level signal wiggling is deferred to concrete subclasses.
  pure virtual task writeData(int unsigned addr, int unsigned data);
  pure virtual task readData (int unsigned addr, output int unsigned data);
  pure virtual task initializeSignals();
endclass
The above BFM abstraction is for an APB master that performs the tasks mentioned. The low-level details of each of these tasks must be encapsulated by interfaces and clocking blocks in order to keep clocking sane and the interface types abstract. I have referenced a book below that describes beautifully how to architect testbenches and design Transaction Level Models (TLMs). Reading it will give you a good understanding of how to design one.
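As a sketch of what one concrete implementation might look like (the apb_if interface and its signal names are assumptions for illustration, not part of the original answer):

// Hypothetical signal-level implementation of the abstract BFM above.
class apb_master_bfm extends apb_bfm;
  virtual apb_if vif;  // assumed interface: pclk, psel, penable, pwrite,
                       // paddr, pwdata, prdata

  function new(virtual apb_if vif);
    this.vif = vif;
  endfunction

  task initializeSignals();
    vif.psel <= 0; vif.penable <= 0; vif.pwrite <= 0;
  endtask

  task writeData(int unsigned addr, int unsigned data);
    @(posedge vif.pclk);                    // setup phase
    vif.paddr <= addr; vif.pwdata <= data;
    vif.pwrite <= 1;   vif.psel   <= 1;
    @(posedge vif.pclk) vif.penable <= 1;   // access phase
    @(posedge vif.pclk);
    vif.psel <= 0; vif.penable <= 0;
  endtask

  task readData(int unsigned addr, output int unsigned data);
    @(posedge vif.pclk);
    vif.paddr <= addr; vif.pwrite <= 0; vif.psel <= 1;
    @(posedge vif.pclk) vif.penable <= 1;
    @(posedge vif.pclk);
    data = vif.prdata;                      // sample the returned data
    vif.psel <= 0; vif.penable <= 0;
  endtask
endclass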
Also, this paper on abstract BFMs gives a good idea of how BFMs should be modeled for a good design. The abstract APB example is derived from it.
The following image, showing how a BFM could be placed in the test framework, is what I could gather from Bergeron's book.
Hopefully that gives a basic understanding of what BFMs are. Of course, writing one and getting it to work is difficult, but this basic knowledge should give you a high-level picture of it.
Book Reference:
Bergeron, J. (2006). Writing Testbenches Using SystemVerilog. Springer.
A BFM is a VIP (verification IP) with dual roles: it can act as a driver or as a monitor/receiver. In the driver role, it packs transactions and drives them at the signal level through the interface handle; without this the DUT cannot accept the transactions, since it only has a signal-level interface. In the receiver role, it unpacks the signal bits arriving through the interface handle into transactions and sends them to a scoreboard/checker. It can also act as a protocol checker in some cases.
You can get a good example of BFM usage here
http://www.testbench.in/SL_00_INDEX.html

How to update regmodel with writes going from RTL blocks

I understand that regmodel values are updated as soon as a transaction is initiated from the test environment on any of the connected interfaces.
However consider a scenario:
RTL registers being updated from a ROM at boot-up (with values different from the defaults)
A processor in the RTL writing to a register, rather than the test environment doing so.
In these 2 cases the regmodel doesn't get updated/mirrored with the correct RTL value. I would like to know the correct procedure for getting the regmodel updated; if there is none at the moment, what other approach can be taken to keep the two in sync?
For the first case you have to pre-load your memory model with your ROM contents at the start of the simulation. There isn't any infrastructure to do this in uvm_mem (unfortunately), so you'll have to implement it yourself.
For the second case, you have to use a more grey-box approach to verification: monitor the bus accesses that the processor makes to the peripherals and update the register values based on those transactions. This should be OK from a maintainability point of view, because the architecture of your SoC should be pretty stable in that sense (you've already decided to use a processor, so that will always be there, even if you don't yet know which peripherals will make it to the end).
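A common way to implement this (a sketch; the agent, monitor, and transaction type names are assumptions) is to hang a passive uvm_reg_predictor off the bus monitor, so that every observed access, whether it came from a test sequence or from the embedded processor, updates the mirror:

uvm_reg_predictor #(bus_item) predictor;  // class member in the env

// in build_phase:
predictor = uvm_reg_predictor #(bus_item)::type_id::create("predictor", this);

// in connect_phase:
predictor.map     = regmodel.default_map;    // map covering the monitored bus
predictor.adapter = adapter;                 // your uvm_reg_adapter subclass
agent.monitor.ap.connect(predictor.bus_in);  // monitor publishes all accesses

With the map's auto-predict turned off, the predictor becomes the single source of mirror updates, so processor-initiated writes are reflected in the regmodel as well.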