Basic UVM sequence simulation query (SystemVerilog)

I have a couple of issues with a basic UVM testbench I'm trying out to understand sequences and how they work:
1. bvalid is always picked up as 0 in the driver, even though it is updated in the response item.
2. I get error messages for the last two transactions (# UVM_ERROR # 18: uvm_test_top.axi_agent1.axi_base_seqr1##axi_base_seq1 [uvm_test_top.axi_agent1.axi_base_seqr1.axi_base_seq1] Response queue overflow, response was dropped)
Here is the link to the compiling code on EDA Playground: http://www.edaplayground.com/x/3x9
Any suggestions on what I'm missing?

Having a look at the specification for $urandom_range, it shows the signature as: function int unsigned $urandom_range( int unsigned maxval, int unsigned minval = 0 ). Change your call to $urandom_range(1, 0) and it should work.
The second error comes from the fact that you are sending responses from the driver but not picking them up in your sequence. This is the line that does it: seq_item_port.item_done(axi_item_driv_src);. Either just do seq_item_port.item_done(); (don't send responses), or put a call to get_response() inside your sequence after finish_item().
What I usually do is update the fields of the original request and just call item_done(). For example, if I start a read transaction, in my driver I drive the control signals and wait for the DUT to respond, update the data field of the request with the data I got from the DUT, and call item_done() in my driver to mark the request as done. This way, if I need this data in my sequence (to constrain some future item, for example), I have it.


What's the application of sequential I2C transmission in the HAL library on the STM32F746NG?

I understand that you can use the first-frame option for the first frame and the next-frame options for the others, but since you can send everything as I2C_FIRST_AND_LAST_FRAME, what is the advantage of the other options, and when must you use them?
Findings:
The code below uses a while loop to continuously transmit two numbers and reads the value back to check that the module has accepted them; if this works correctly, the LED must blink.
In this simple code I've tested every XferOptions value for sequential transmission; every option worked except I2C_LAST_FRAME_NO_STOP and I2C_FIRST_FRAME.
Code:
while (1)
{
    value = 300;
    *(uint16_t*) buffer = (value << 8) | (value >> 8); // data prepared for the DAC module
    HAL_I2C_Master_Seq_Transmit_IT(&hi2c1, (MCP4725A0_ADDR_A00 << 1), buffer, 2, I2C_LAST_FRAME_NO_STOP);
    HAL_Delay(1);
    HAL_I2C_Master_Receive(&hi2c1, (MCP4725A0_ADDR_A00 << 1), rxbuffer, 3, 1000);
    if ((uint16_t)(((uint16_t)rxbuffer[1]) << 8 | ((uint16_t)rxbuffer[2])) >> 4 == value) {
        HAL_GPIO_WritePin(LED_GPIO_Port, LED_Pin, GPIO_PIN_SET);
    }
    HAL_Delay(50);

    value = 4000;
    *(uint16_t*) buffer = (value << 8) | (value >> 8);
    HAL_I2C_Master_Seq_Transmit_IT(&hi2c1, (MCP4725A0_ADDR_A00 << 1), buffer, 2, I2C_LAST_FRAME_NO_STOP);
    HAL_Delay(1);
    HAL_I2C_Master_Receive(&hi2c1, (MCP4725A0_ADDR_A00 << 1), rxbuffer, 3, 1000);
    if ((uint16_t)(((uint16_t)rxbuffer[1]) << 8 | ((uint16_t)rxbuffer[2])) >> 4 == value) {
        HAL_GPIO_WritePin(LED_GPIO_Port, LED_Pin, GPIO_PIN_RESET);
    }
    HAL_Delay(50);
}
The HAL sometimes documents these options poorly, and you will need to dive into the reference manual!
Looking at what the #defines are:
https://github.com/STMicroelectronics/STM32CubeF7/blob/f8bda023e34ce9935cb4efb9d1c299860137b6f3/Drivers/STM32F7xx_HAL_Driver/Inc/stm32f7xx_hal_i2c.h#L302-L307
/** @defgroup I2C_XFEROPTIONS I2C Sequential Transfer Options
  * @{
  */
#define I2C_FIRST_FRAME           ((uint32_t)I2C_SOFTEND_MODE)
#define I2C_FIRST_AND_NEXT_FRAME  ((uint32_t)(I2C_RELOAD_MODE | I2C_SOFTEND_MODE))
#define I2C_NEXT_FRAME            ((uint32_t)(I2C_RELOAD_MODE | I2C_SOFTEND_MODE))
#define I2C_FIRST_AND_LAST_FRAME  ((uint32_t)I2C_AUTOEND_MODE)
#define I2C_LAST_FRAME            ((uint32_t)I2C_AUTOEND_MODE)
#define I2C_LAST_FRAME_NO_STOP    ((uint32_t)I2C_SOFTEND_MODE)
We can see references to RELOAD, AUTOEND and SOFTEND modes.
Digging into the reference manual:
https://www.st.com/resource/en/reference_manual/rm0385-stm32f75xxx-and-stm32f74xxx-advanced-armbased-32bit-mcus-stmicroelectronics.pdf#page=969
So we can see here the reference to:
AUTOEND - a way to automatically generate a STOP condition after the set number of bytes has been transferred.
SOFTEND - a way to prevent the automatic STOP condition and leave it to software to decide when the transfer ends.
Relationship to your observed behaviour
The defines using SOFTEND mode are exactly where you saw things not working, and this is to be expected: the I2C protocol was not being fulfilled, because nothing in the code ever generated the STOP condition.
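For illustration, here is a minimal sketch (reusing hi2c1, buffer, rxbuffer and the DAC address from your code) of how a SOFTEND-based option is meant to be paired with a closing frame; the busy-waits are a simplification of proper completion handling:

/* Write without a STOP: I2C_FIRST_FRAME is SOFTEND-based, so the bus stays claimed. */
HAL_I2C_Master_Seq_Transmit_IT(&hi2c1, (MCP4725A0_ADDR_A00 << 1), buffer, 2, I2C_FIRST_FRAME);
while (HAL_I2C_GetState(&hi2c1) != HAL_I2C_STATE_READY) { }
/* Read back after a repeated START; I2C_LAST_FRAME is AUTOEND-based and finally generates the STOP. */
HAL_I2C_Master_Seq_Receive_IT(&hi2c1, (MCP4725A0_ADDR_A00 << 1), rxbuffer, 3, I2C_LAST_FRAME);
while (HAL_I2C_GetState(&hi2c1) != HAL_I2C_STATE_READY) { }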
So what does this mean you can do? An example: a variable-byte I2C slave receiver.
I haven't found a shining example from ST for this, but let me illustrate with an approach I have implemented in a project for an I2C slave.
Let us look at the callbacks that are called:
https://github.com/STMicroelectronics/STM32CubeF7/blob/master/Drivers/STM32F7xx_HAL_Driver/Src/stm32f7xx_hal_i2c.c#L76-L97
*** Interrupt mode IO operation ***
===================================
[..]
(+) Transmit in master mode an amount of data in non-blocking mode using HAL_I2C_Master_Transmit_IT()
(+) At transmission end of transfer, HAL_I2C_MasterTxCpltCallback() is executed and users can
add their own code by customization of function pointer HAL_I2C_MasterTxCpltCallback()
(+) Receive in master mode an amount of data in non-blocking mode using HAL_I2C_Master_Receive_IT()
(+) At reception end of transfer, HAL_I2C_MasterRxCpltCallback() is executed and users can
add their own code by customization of function pointer HAL_I2C_MasterRxCpltCallback()
(+) Transmit in slave mode an amount of data in non-blocking mode using HAL_I2C_Slave_Transmit_IT()
(+) At transmission end of transfer, HAL_I2C_SlaveTxCpltCallback() is executed and users can
add their own code by customization of function pointer HAL_I2C_SlaveTxCpltCallback()
(+) Receive in slave mode an amount of data in non-blocking mode using HAL_I2C_Slave_Receive_IT()
(+) At reception end of transfer, HAL_I2C_SlaveRxCpltCallback() is executed and users can
add their own code by customization of function pointer HAL_I2C_SlaveRxCpltCallback()
(+) In case of transfer Error, HAL_I2C_ErrorCallback() function is executed and users can
add their own code by customization of function pointer HAL_I2C_ErrorCallback()
(+) Abort a master I2C process communication with Interrupt using HAL_I2C_Master_Abort_IT()
(+) End of abort process, HAL_I2C_AbortCpltCallback() is executed and users can
add their own code by customization of function pointer HAL_I2C_AbortCpltCallback()
(+) Discard a slave I2C process communication using __HAL_I2C_GENERATE_NACK() macro.
This action will inform Master to generate a Stop condition to discard the communication.
Therefore, you could implement an I2C slave that reads a variable/dynamic amount of data, as in the sketch below:
Receive 1 byte using the SOFTEND-based options. This prevents the STOP condition from being raised, but once this first byte is received, HAL_I2C_SlaveRxCpltCallback() is triggered.
In HAL_I2C_SlaveRxCpltCallback(), check the value of the first byte and then request the remaining data, of whatever further length, this time using an AUTOEND-based option.
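Here is a minimal sketch of that idea. The one-byte length header, MSG_MAX and the buffer names are illustrative assumptions, not something from your code, and HAL_I2C_EnableListen_IT(&hi2c1) is assumed to have been called once after initialisation:

#define MSG_MAX 32

static uint8_t header;               /* assumed protocol: first byte = payload length */
static uint8_t payload[MSG_MAX];
static volatile uint8_t got_header;  /* 0 = waiting for header, 1 = waiting for payload */

/* Address match: arm reception of just the header byte.
   I2C_FIRST_FRAME is SOFTEND-based, so no STOP is expected yet. */
void HAL_I2C_AddrCallback(I2C_HandleTypeDef *hi2c, uint8_t TransferDirection, uint16_t AddrMatchCode)
{
    got_header = 0;
    HAL_I2C_Slave_Seq_Receive_IT(hi2c, &header, 1, I2C_FIRST_FRAME);
}

void HAL_I2C_SlaveRxCpltCallback(I2C_HandleTypeDef *hi2c)
{
    if (!got_header) {
        got_header = 1;
        /* Header received: request exactly that many more bytes (a real
           implementation would range-check header first), this time letting
           the AUTOEND-based I2C_LAST_FRAME close the transfer. */
        HAL_I2C_Slave_Seq_Receive_IT(hi2c, payload, header, I2C_LAST_FRAME);
    } else {
        /* Complete message now sits in payload[0 .. header-1]. */
    }
}

/* Re-arm listen mode for the next master transaction. */
void HAL_I2C_ListenCpltCallback(I2C_HandleTypeDef *hi2c)
{
    HAL_I2C_EnableListen_IT(hi2c);
}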

Fetching LogBook descriptors in custom firmware

I'm looking to fetch recorded data using LogBook in a custom Movesense firmware. How do I get the correct byte stream offset for the next GET call when receiving HTTP_CONTINUE?
I'm trying to implement these steps as described in DataStorage.md:
### /Logbook usage ###
To get recording from the Movesense sensors EEPROM storage, you need to:
1. Do **GET** on */Logbook/Entries*. This returns a list of LogEntry objects. If the status was HTTP_OK, the list is complete. If the result code is HTTP_CONTINUE, you must GET again with the parameter StartAfterId set to the Id of the last entry you received and you'll get the next entries.
2. Choose the Log that you are interested in and notice the Id of it.
3. Fetch the descriptors with **GET** to */Logbook/byId/<Id>/Descriptors*. This returns a bytestream with the similar HTTP_CONTINUE handling as above. However you **must** keep re-requesting the **GET** until you get error or HTTP_OK, or the Logbook service will stay "in the middle of the stream" (we hope to remove this limitation in the future).
4. Fetch the data with **GET** to */Logbook/byId/<Id>/Data*. This returns also a bytestream (just like the */Logbook/Descriptors* above).
5. Convert the data using the converter tools or classes. (To Be Continued...)
The problem is basically the same for steps 3 and 4. I receive a whiteboard::ByteStream object in the onGetResult callback function, but I don't know how to get the correct offset information from it.
In ByteStream.h I've found a number of methods that seem to concern different byte counts (length, fullSize, transmitted, payloadSize and serializationLength), but I just can't get it working properly.
Basically I would like to do something like this in onGetResult:
if (resultCode == whiteboard::HTTP_CODE_CONTINUE) {
    const whiteboard::ByteStream &byteStream = rResultData.convertTo<const whiteboard::ByteStream &>();
    currentEntryOffset += byteStream.length();
    asyncGet(WB_RES::LOCAL::MEM_LOGBOOK_BYID_LOGID_DESCRIPTORS(), AsyncRequestOptions::Empty, currentEntryIdToFetch, currentEntryOffset);
    return;
}
The basic idea is to do the same call again.
So if you do:
asyncGet(WB_RES::LOCAL::MEM_LOGBOOK_BYID_LOGID_DESCRIPTORS(),AsyncRequestOptions::Empty, currentEntryIdToFetch);
and get the response HTTP_CONTINUE, do:
asyncGet(WB_RES::LOCAL::MEM_LOGBOOK_BYID_LOGID_DESCRIPTORS(),AsyncRequestOptions::Empty, currentEntryIdToFetch);
again, until you get HTTP_OK or an error.
The documentation's note ("If the result code is HTTP_CONTINUE, you must GET again with the parameter StartAfterId set to the Id of the last entry you received and you'll get the next entries.") might be a bit cryptic, but it means: do another asyncGet to the exact same resource until you get HTTP_OK or an HTTP error code.
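Concretely, inside onGetResult it could look like the sketch below. MyLogbookClient and appendToBuffer() are hypothetical names (the latter standing in for wherever you store the received chunk); currentEntryIdToFetch is the member from your snippet:

void MyLogbookClient::onGetResult(whiteboard::RequestId requestId,
                                  whiteboard::ResourceId resourceId,
                                  whiteboard::Result resultCode,
                                  const whiteboard::Value& rResultData)
{
    const whiteboard::ByteStream& byteStream = rResultData.convertTo<const whiteboard::ByteStream&>();
    appendToBuffer(byteStream);  // hypothetical: stash this chunk

    if (resultCode == whiteboard::HTTP_CODE_CONTINUE) {
        // Note: no offset parameter. The Logbook service tracks its own
        // position in the stream, so simply repeat the identical GET.
        asyncGet(WB_RES::LOCAL::MEM_LOGBOOK_BYID_LOGID_DESCRIPTORS(),
                 AsyncRequestOptions::Empty, currentEntryIdToFetch);
        return;
    }
    // HTTP_CODE_OK (or an error code): the byte stream is finished.
}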
Also, note that you need to decode the data; a Python script for that can be found in this answer.

Reading from Socket Stream Blocking After Retrieval

I'm currently attempting to read an incoming message from a client socket that, prior to the procedure below, has already been connected to the server socket. The procedure outputs the message, one character at a time, as it retrieves it from the stream.
The problem is that, when the stream is out of information, the call to Ada.Streams.Read is blocking, and stops the application flow completely. According to some examples, it would appear as though Offset should be set to 0 automatically at the end of the stream, but that never happens. Instead the application stops at the call to Read.
procedure Read_From (Channel : Sockets.Stream_Access) is
   --  Channel is obtained as GNAT.Sockets.Stream (Client_Socket)
   use Ada.Text_IO;
   use Ada.Streams;

   Data   : Stream_Element_Array (1 .. 1);
   Offset : Stream_Element_Offset;
begin
   loop
      Read (Channel.all, Data, Offset);
      exit when Offset = 0;
      Put (Character'Val (Data (1)));
   end loop;

   --  The application never reaches this point.
   New_Line;
   Put_Line ("Finished reading from client!");
end Read_From;
I've also attempted the same process with GNAT.Sockets.Receive_Socket, but the same issue remains: the application flow stops completely, presumably awaiting further information from the stream, even though there is nothing more to retrieve.
Any pointers in the right direction would be highly appreciated!
Normally, you’d read a (binary) message from a stream knowing how much data needed to be read, so you could read until you’d got that much.
But, if you’re reading a text message from an externally-defined source, as it might be an HTTP request, there needs to be some terminator sequence so you can read character-by-character until you’ve read the terminator. In the case of an HTTP request, that’s a CR/LF/CR/LF sequence. Or it could be a null-terminated C string, in which case you’d be looking for the ASCII.NUL.
The Ada way to transfer variable-length text is to use String’Output/String’Input (see ARM 13.13.2(18)ff). What happens for a String (an array of Character) is that first the bounds are sent, then the content; on reception, the bounds are read, a String with those bounds is created, and the required number of bytes are read into the new String, which is then returned.
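The same length-prefixed idea, sketched outside Ada for comparison (C++ with POSIX I/O; read_all/write_all are small helpers defined here, and a real implementation would also pin down byte order for the length prefix):

#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <string>
#include <unistd.h>

// Loop until exactly n bytes have been moved; read()/write() may do less.
static void read_all(int fd, void *buf, std::size_t n) {
    char *p = static_cast<char *>(buf);
    while (n > 0) {
        ssize_t k = ::read(fd, p, n);
        if (k <= 0) throw std::runtime_error("peer closed the connection");
        p += k;
        n -= static_cast<std::size_t>(k);
    }
}

static void write_all(int fd, const void *buf, std::size_t n) {
    const char *p = static_cast<const char *>(buf);
    while (n > 0) {
        ssize_t k = ::write(fd, p, n);
        if (k <= 0) throw std::runtime_error("write failed");
        p += k;
        n -= static_cast<std::size_t>(k);
    }
}

// "Bounds first, then content", like String'Output: the reader always
// knows exactly how many bytes to wait for, so it never blocks forever.
void send_string(int fd, const std::string &s) {
    const std::uint32_t len = static_cast<std::uint32_t>(s.size());
    write_all(fd, &len, sizeof len);
    write_all(fd, s.data(), len);
}

std::string recv_string(int fd) {
    std::uint32_t len = 0;
    read_all(fd, &len, sizeof len);
    std::string s(len, '\0');
    read_all(fd, &s[0], len);
    return s;
}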
Basically, that's how Ada streams work: the end of the stream is reached only at the final end of the connection, not at the momentary end of a buffer.
If you want to be able to interrupt reading, you have to use a representation of the connection other than GNAT.Sockets.Stream_Access.

UVM register write is stuck and never returns

I have a register block, along with a corresponding register adapter, set up to translate onto a bus protocol.
When I call the write() method on one of my registers, I can see the transaction going out and the driver completing its job, but write() is stuck somewhere.
Please see excerpt of driver and sequence below:
// ... UVM driver
forever begin
  seq_item_port.get_next_item(req);
  $display("DEBUG A");
  // ... do transaction
  seq_item_port.item_done();
  $display("DEBUG B");
end

// ... sequence
$display("START WRITE");
my_reg_block.my_reg1.write(...);  // standard uvm_reg::write(status, value, ...) call
$display("DONE WRITE");
The result:
START WRITE
DEBUG A
DEBUG B
and then the simulation is stuck there; I never see DONE WRITE.
I am quite sure all the connect/set_sequencer calls have been made properly; otherwise my driver wouldn't see the transaction in the first place. And this is a pretty simple test, only doing that one write.
Any idea why it is stuck in the register write() method even though the driver seems to have completed the transaction? I probably missed something.
In uvm_reg_map::do_bus_write(...) there's the following code snippet that handles the bus request for a register access:
bus_req.set_sequencer(sequencer);
rw.parent.start_item(bus_req, rw.prior);
if (rw.parent != null && i == 0)
  rw.parent.mid_do(rw);
rw.parent.finish_item(bus_req);
bus_req.end_event.wait_on();
Notice the end_event.wait_on(). This event is normally triggered on a sequence item by the sequencer once item_done() has been called and finish_item() returns:
`ifndef UVM_DISABLE_AUTO_ITEM_RECORDING
  sequencer.end_tr(item);
`endif
It's possible to turn this off by compiling with the UVM_DISABLE_AUTO_ITEM_RECORDING define, which is what I guess is happening in your case.

Weird Winsock recv() slowdown

I'm writing a little VoIP app like Skype, which works quite well right now, but I've run into a very strange problem.
In one thread, inside a while(true) loop, I call the Winsock recv() function twice per iteration to get data from a socket.
The first call gets 2 bytes, which are cast into a short, while the second call gets the rest of the message, which looks like:
Complete message: [2-byte header | message, length determined by the 2-byte header]
These packets arrive at roughly 49/sec, which is roughly 3000 bytes/sec.
The content of these packets is audio data that gets converted into wave.
With ioctlsocket() I determine whether there is data left on the socket after each "message" I receive (2 bytes + data). If there is something on the socket right after I received a message within the thread's while(true) loop, the next message is received but thrown away, to work against latency stacking up.
This concept works very well, but here's the problem:
While my VoIP program is running and I download a file in parallel (e.g. via a browser), too much data always piles up on the socket, because the recv() loop actually seems to slow down while downloading. This happens in every download/upload situation besides the actual VoIP traffic itself.
I don't know where this behaviour comes from, but when I cancel every upload/download besides my application's VoIP traffic, my app works perfectly again.
When the program runs perfectly, the ioctlsocket() call writes 0 into the bytesLeft variable, which is defined in the class the receive function belongs to.
Does somebody know where this comes from? I'll attach my receive function below:
std::string D_SOCKETS::receive_message()
{
    recv(ClientSocket, (char*)&val, sizeof(val), MSG_WAITALL);
    receivedBytes = recv(ClientSocket, buffer, val, MSG_WAITALL);

    if (receivedBytes != val) {
        printf("SHORT: %d PAKET: %d ERROR: %d", val, receivedBytes, WSAGetLastError());
        exit(128);
    }

    ioctlsocket(ClientSocket, FIONREAD, &bytesLeft);
    cout << "Bytes left on the Socket:" << bytesLeft << endl;

    if (bytesLeft > 20) {
        // the message was received, but is thrown away to work against latency
        return std::string();
    }
    else {
        return std::string(buffer, receivedBytes);
    }
}
There is no need to use ioctlsocket() to discard data. That would indicate a bug in your protocol design. Assuming you are using TCP (you did not say), there should not be any left over data if your 2byte header is always accurate. After reading the 2byte header and then reading the specified number of bytes, the next bytes you receive after that constitute your next message and should not be discarded simply because it exists.
The fact that ioctlsocket() reports more bytes available means that you are receiving messages faster than you are reading them from the socket. Make your reading code run faster, don't throw away good data due to your slowness.
Your reading model is not efficient. Instead of reading 2 bytes, then X bytes, then 2 bytes, and so on, you should instead use a larger buffer to read more raw data from the socket at one time (use ioctlsocket() to know how many bytes are available, then read at least that many bytes at once and append them to the end of your buffer), and then parse as many complete messages as are in the buffer before reading more raw data from the socket again. The more data you can read at a time, the faster you can receive data.
To help speed up the code even more, don't process the messages inside the loop directly, either. Do the processing in another thread instead. Have the reading loop put complete messages in a queue and go back to reading, and then have a processing thread pull from the queue whenever messages are available for processing.
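To make the buffered model concrete, here is a minimal sketch. The 4096-byte chunk size, the function name and the std::deque hand-off are illustrative, not from your code, and in a real program outQueue would be protected by a mutex shared with the processing thread:

#include <winsock2.h>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <deque>
#include <string>
#include <vector>

// Reads whatever is available on a connected TCP socket, then extracts every
// complete [2-byte header | payload] message currently in the accumulator.
bool pump_messages(SOCKET s, std::vector<char> &acc, std::deque<std::string> &outQueue)
{
    char chunk[4096];
    int got = recv(s, chunk, sizeof chunk, 0);   // one call can grab many messages
    if (got <= 0)
        return false;                            // connection closed, or an error
    acc.insert(acc.end(), chunk, chunk + got);

    std::size_t pos = 0;
    while (acc.size() - pos >= 2) {
        std::uint16_t len;
        std::memcpy(&len, acc.data() + pos, 2);  // 2-byte header = payload length
        if (acc.size() - pos < 2u + len)
            break;                               // payload not fully arrived yet
        outQueue.emplace_back(acc.data() + pos + 2, len);  // queue for the processing thread
        pos += 2u + len;
    }
    acc.erase(acc.begin(), acc.begin() + static_cast<std::ptrdiff_t>(pos));  // keep any partial tail
    return true;
}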