Creating a lookup table in Chisel - Scala

I am trying to create a lookup table in Chisel of width 72 bits and 1024 entries. These 1024 entries are stored separately in a file, which I read into my code. The code I have written so far is:
import Chisel._
import scala.io.Source._

class mdlNm extends Module {
  // function to read entries from file 'omega_i.dat'
  def fileRead() = {
    val fileIn = fromFile("omega_i.dat").getLines.toList
    val num = fileIn.map(i => BigInt(i, 16))           // converting the hexadecimal entries from string to BigInt
    val uInt = num.map(i => UInt(i, width = 72))       // converting BigInt entries to UInt of width 72
    ROM(uInt)                                          // Chisel construct for creating an LUT for fixed entries
  }

  // The above LUT is later read as follows:
  val in = Bits("h123")                                // Any 10-bit input to the LUT
  val lutOut = fileRead().read(in)                     // Value read from the LUT
}
The above code throws up many errors of the form:
cppBackend//sinCos.cpp:2407:23: error: ‘T1785’ was not declared in this scope
{ T425.put(1018, 0, T1785[0]); T425.put(1018, 1, T1785[1]);}
^
cppBackend//sinCos.cpp:2408:23: error: ‘T1786’ was not declared in this scope
{ T425.put(1019, 0, T1786[0]); T425.put(1019, 1, T1786[1]);}
^
cppBackend//sinCos.cpp:2409:23: error: ‘T1787’ was not declared in this scope
{ T425.put(1020, 0, T1787[0]); T425.put(1020, 1, T1787[1]);}
^
cppBackend//sinCos.cpp:2410:23: error: ‘T1788’ was not declared in this scope
{ T425.put(1021, 0, T1788[0]); T425.put(1021, 1, T1788[1]);}
^
cppBackend//sinCos.cpp:2411:23: error: ‘T1789’ was not declared in this scope
{ T425.put(1022, 0, T1789[0]); T425.put(1022, 1, T1789[1]);}
^
cppBackend//sinCos.cpp:2412:23: error: ‘T1790’ was not declared in this scope
{ T425.put(1023, 0, T1790[0]); T425.put(1023, 1, T1790[1]);}
However, when I change the width of uInt to any number <= 64, no such issues arise and the code works properly.
Is there an alternative way to create an LUT of the size I specified above, in Chisel? Or am I doing something wrong in the above code?
Please help.

In chisel3, the current version, this would be constructed a little differently: VecInit is used instead of ROM.
I would recommend creating an intermediate value lut to hold the ROM created by buildLookupTable,
because each call to buildLookupTable would read the file again and create another ROM.
import chisel3._
import chisel3.util._
import firrtl.FileUtils

class SomeModule extends MultiIOModule {
  def buildLookupTable(): Vec[UInt] = {
    VecInit(FileUtils.getLines("file1.dat").map { s => BigInt(s, 16).U })
  }

  val lut = buildLookupTable()

  // The above LUT is later read as follows:
  val in = 0x123.U          // Any 10-bit input to the LUT
  val lutOut = lut(in)      // Value read from the LUT

  // rest of module
  // ...
}
I don't know what the problem with widths was in your case, but I have tested the above with UInts of width 500 and it works fine.
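For completeness, here is a minimal, hedged sketch of how the lut might be wired to module ports in chisel3; the module name, the io bundle, and the 10-bit address / 72-bit data widths are my assumptions based on the question, not code from the original answer:
import chisel3._
import chisel3.util._
import firrtl.FileUtils

// Hypothetical wrapper: a 1024 x 72-bit ROM read through an address/data interface
class LutModule extends Module {
  val io = IO(new Bundle {
    val addr = Input(UInt(10.W))    // 1024 entries -> 10-bit address
    val data = Output(UInt(72.W))   // 72-bit entries
  })

  // Build the ROM once, held in a val, so the file is read only a single time
  val lut: Vec[UInt] = VecInit(
    FileUtils.getLines("file1.dat").map(s => BigInt(s, 16).U(72.W))
  )

  io.data := lut(io.addr)
}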


AssemblyScript - Linear Nested Class Layout

I'm working on a linear data layout where components are alongside each other in memory. Things were going ok until I realized I don't have a way to make offsetof and changetype calls when dealing with nested classes.
For instance, this works as intended:
class Vec2 {
  x: u8
  y: u8
}
const size = offsetof<Vec2>()       // 2 -- ok
const ptr = heap.alloc(size)
changetype<Vec2>(ptr).x = 7         // memory = [7,0] -- ok
Naturally, this approach fails when nesting classes:
class Player {
  position: Vec2
  health: u8
}
const size = offsetof<Player>()     // 5 -- want 3, position is a pointer
const ptr = heap.alloc(size)
changetype<Player>(ptr).position.x = 7   // [0,0,0,0,0] -- want [7,0,0], instead accidentally changed pointer 0
The goal is for the memory layout to look like this:
| Player 1 | Player 2 | ...
| x y z h | x y z h |
Ideally I'd love to be able to create 'value-type' fields, or if this isn't a thing, are there alternative approaches?
I'm hoping to avoid extensive boilerplate whenever writing a new component, i.e. manual size calculation and doing a changetype for each field at its offset, etc.
In case anybody is interested I'll post my current solution here. The implementation is a little messy but is certainly automatable using custom scripts or compiler transforms.
Goal: Create a linear proxy for the following class so that the main function behaves as expected:
class Foo {
  position: Vec2
  health: u8
}
export function main(): void {
  const ptr = heap.alloc(FooProxy.size)
  const foo = changetype<FooProxy>(ptr)
  foo.health = 3
  foo.position.x = 9
  foo.position.y = 10
}
Solution: calculate offsets and alignments for each field.
class TypeMetadataBase {
  get align(): u32 { return 0 }
  get offset(): u32 { return 0 }
}

class TypeMetadata<T> extends TypeMetadataBase {
  get align(): u32 { return alignof<T>() }
  get offset(): u32 { return offsetof<T>() }
  constructor() {
    super()
    if (this.offset == 0)
      throw new Error('offset shouldnt be zero, for primitive types use PrimitiveMetadata')
  }
};

class PrimitiveMetadata<T> extends TypeMetadataBase {
  get align(): u32 { return sizeof<T>() }
  get offset(): u32 { return sizeof<T>() }
};

class LinearSchema {
  metadatas: StaticArray<TypeMetadataBase>
  size: u32
  offsets: StaticArray<u32>
  constructor(metadatas: StaticArray<TypeMetadataBase>) {
    let align: u32 = 0
    const offsets = new StaticArray<u32>(metadatas.length)
    for (let i = 0; i < metadatas.length; i++) {
      if (metadatas[i].align !== 0)
        while (align % metadatas[i].align !== 0)
          align++
      offsets[i] = align
      align += metadatas[i].offset
    }
    this.offsets = offsets
    this.metadatas = metadatas
    this.size = align
  }
}

class Vec2 {
  x: u8
  y: u8
}

class FooSchema extends LinearSchema {
  constructor() {
    super([
      new PrimitiveMetadata<u8>(),
      new TypeMetadata<Vec2>(),
    ])
  }
}

const schema = new FooSchema()

class FooProxy {
  static get size(): u32 { return schema.size }
  set health(value: u8) { store<u8>(changetype<usize>(this) + schema.offsets[0], value) }
  get health(): u8 { return load<u8>(changetype<usize>(this) + schema.offsets[0]) }
  get position(): Vec2 { return changetype<Vec2>(changetype<usize>(this) + schema.offsets[1]) }
}

Apache Spark Data Generator Function on Databricks Not working

I am trying to execute the Data Generator function provided by Microsoft to test streaming data to Event Hubs.
Unfortunately, I keep on getting the error
Processing failure: No such file or directory
When I try and execute the function:
%scala
DummyDataGenerator.start(15)
Can someone take a look at the code and help decipher why I'm getting the error:
class DummyDataGenerator:
  streamDirectory = "/FileStore/tables/flight"
None # suppress output
I'm not sure how the above cell gets used by the DummyDataGenerator defined below.
%scala
import scala.util.Random
import java.io._
import java.time._

// Notebook #2 has to set this to 8, we are setting
// it to 200 to "restore" the default behavior.
spark.conf.set("spark.sql.shuffle.partitions", 200)

// Make the username available to all other languages.
// WARNING: use of the "current" username is unpredictable
// when multiple users are collaborating and should be replaced
// with the notebook ID instead.
val username = com.databricks.logging.AttributionContext.current.tags(com.databricks.logging.BaseTagDefinitions.TAG_USER);
spark.conf.set("com.databricks.training.username", username)

object DummyDataGenerator extends Runnable {
  var runner : Thread = null;
  val className = getClass().getName()
  val streamDirectory = s"dbfs:/tmp/$username/new-flights"

  val airlines = Array( ("American", 0.17), ("Delta", 0.12), ("Frontier", 0.14), ("Hawaiian", 0.13), ("JetBlue", 0.15), ("United", 0.11), ("Southwest", 0.18) )
  val reasons = Array("Air Carrier", "Extreme Weather", "National Aviation System", "Security", "Late Aircraft")

  val rand = new Random(System.currentTimeMillis())
  var maxDuration = 3 * 60 * 1000 // default to three minutes

  def clean() {
    System.out.println("Removing old files for dummy data generator.")
    dbutils.fs.rm(streamDirectory, true)
    if (dbutils.fs.mkdirs(streamDirectory) == false) {
      throw new RuntimeException("Unable to create temp directory.")
    }
  }

  def run() {
    val date = LocalDate.now()
    val start = System.currentTimeMillis()

    while (System.currentTimeMillis() - start < maxDuration) {
      try {
        val dir = s"/dbfs/tmp/$username/new-flights"
        val tempFile = File.createTempFile("flights-", "", new File(dir)).getAbsolutePath()+".csv"
        val writer = new PrintWriter(tempFile)

        for (airline <- airlines) {
          val flightNumber = rand.nextInt(1000)+1000
          val deptTime = rand.nextInt(10)+10
          val departureTime = LocalDateTime.now().plusHours(-deptTime)
          val (name, odds) = airline
          val reason = Random.shuffle(reasons.toList).head
          val test = rand.nextDouble()

          val delay = if (test < odds)
            rand.nextInt(60)+(30*odds)
          else rand.nextInt(10)-5

          println(s"- Flight #$flightNumber by $name at $departureTime delayed $delay minutes due to $reason")
          writer.println(s""" "$flightNumber","$departureTime","$delay","$reason","$name" """.trim)
        }
        writer.close()

        // wait a couple of seconds
        //Thread.sleep(rand.nextInt(5000))

      } catch {
        case e: Exception => {
          printf("* Processing failure: %s%n", e.getMessage())
          return;
        }
      }
    }
    println("No more flights!")
  }

  def start(minutes:Int = 5) {
    maxDuration = minutes * 60 * 1000

    if (runner != null) {
      println("Stopping dummy data generator.")
      runner.interrupt();
      runner.join();
    }
    println(s"Running dummy data generator for $minutes minutes.")
    runner = new Thread(this);
    runner.run();
  }

  def stop() {
    start(0)
  }
}
DummyDataGenerator.clean()

displayHTML("Imported streaming logic...") // suppress output
You should be able to use the Databricks Labs Data Generator on the Databricks community edition. I'm providing the instructions below.
Running Databricks Labs Data Generator on the community edition
The Databricks Labs Data Generator is a PySpark library, so the code to generate the data needs to be Python. But you should be able to create a view on the generated data and consume it from Scala if that's your preferred language.
You can install the framework on the Databricks community edition by creating a notebook with the cell
%pip install git+https://github.com/databrickslabs/dbldatagen
Once it's installed, you can use the library to define a data generation spec and, by calling build, generate a Spark dataframe from it.
The following example shows generation of batch data similar to the data set you are trying to generate; it should be placed in a separate notebook cell.
Note: here we generate 10 million records to illustrate the ability to create larger data sets; the library can be used to generate datasets much larger than that.
%python
import dbldatagen as dg

num_rows = 10 * 1000000   # number of rows to generate
num_partitions = 8        # number of Spark dataframe partitions

delay_reasons = ["Air Carrier", "Extreme Weather", "National Aviation System", "Security", "Late Aircraft"]

# will have implied column `id` for ordinal of row
flightdata_defn = (dg.DataGenerator(spark, name="flight_delay_data", rows=num_rows, partitions=num_partitions)
                   .withColumn("flightNumber", "int", minValue=1000, uniqueValues=10000, random=True)
                   .withColumn("airline", "string", minValue=1, maxValue=500, prefix="airline", random=True, distribution="normal")
                   .withColumn("original_departure", "timestamp", begin="2020-01-01 01:00:00", end="2020-12-31 23:59:00", interval="1 minute", random=True)
                   .withColumn("delay_minutes", "int", minValue=20, maxValue=600, distribution=dg.distributions.Gamma(1.0, 2.0))
                   .withColumn("delayed_departure", "timestamp", expr="cast(original_departure as bigint) + (delay_minutes * 60) ", baseColumn=["original_departure", "delay_minutes"])
                   .withColumn("reason", "string", values=delay_reasons, random=True)
                   )

df_flight_data = flightdata_defn.build()

display(df_flight_data)
You can find information on how to generate streaming data in the online documentation at https://databrickslabs.github.io/dbldatagen/public_docs/using_streaming_data.html
You can create a named temporary view over the data so that you can access it from SQL or Scala using one of two methods:
1: use createOrReplaceTempView
df_flight_data.createOrReplaceTempView("delays")
2: pass options to build. In this case the name passed to the data generator instance initializer will be the name of the view,
i.e.:
df_flight_data = flightdata_defn.build(withTempView=True)
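Either way, the generated data can then be consumed from a Scala cell through the temporary view; a minimal sketch, assuming the view is named delays as above:
%scala
// Read the data generated by the Python cell through the registered temporary view
val dfDelays = spark.table("delays")
dfDelays.printSchema()
display(dfDelays)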
This code will not work on the community edition because of this line:
val dir = s"/dbfs/tmp/$username/new-flights"
as there is no DBFS fuse mount on Databricks community edition (it's supported only on full Databricks). It's potentially possible to make it work by:
1: changing that directory to a local directory, like /tmp or similar
2: adding code (after writer.close()) to list the flights-* files in that local directory and move them into streamDirectory using dbutils.fs.mv (a rough sketch follows below)
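As a rough, untested sketch of those two changes: the local directory name and the file: URI prefix are my assumptions, and only the write/move portion of run() is shown.
%scala
// Inside run(): write each batch to a local driver directory, then move it into DBFS
val localDir = "/tmp/new-flights"   // local directory instead of /dbfs/... (assumption)
new File(localDir).mkdirs()

val tempFile = File.createTempFile("flights-", "", new File(localDir)).getAbsolutePath() + ".csv"
val writer = new PrintWriter(tempFile)
// ... write the flight rows exactly as before ...
writer.close()

// Move the finished file into the DBFS stream directory so the streaming reader sees it
dbutils.fs.mv(s"file:$tempFile", streamDirectory)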

Counting length of repetition in macro

I'm trying to implement a macro to allow MATLAB-esque matrix creation. I've got a basic working macro but I still have a long way to go.
I want to be able to enforce the right structure (same number of elements in each row) but I'm not sure how to do this within the macro. I think I want to enforce that each internal repetition has the same length - is this something I can do?
Here is my code so far:
pub struct Matrix<T> {
    pub cols: usize,
    pub rows: usize,
    pub data: Vec<T>
}

macro_rules! mat {
    ( $($( $x:expr ),*);* ) => {
        {
            let mut vec = Vec::new();
            let mut rows = 0;
            $(
                $(
                    vec.push($x);
                )*
                rows += 1;
            )*
            Matrix { cols: vec.len()/rows, rows: rows, data: vec }
        }
    };
}
It works but as you can see isn't very safe. It has no restrictions on the structure.
I want to do a lot more with this macro but I think this is a good start!
Update:
Here is some playground code for a crappy implementation I worked out. If anyone has any better suggestions please let me know! Otherwise I'll close this myself.
macro_rules! count {
    () => (0usize);
    ( $x:tt $($xs:tt)* ) => (1usize + count!($($xs)*));
}

macro_rules! mat {
    ( $( $x:expr ),* ) => { {
        let vec = vec![$($x),*];
        Matrix { cols: vec.len(), rows: 1, data: vec }
    } };
    ( $( $x0:expr ),* ; $($( $x:expr ),*);* ) => { {
        let mut _assert_width0 = [(); count!($($x0)*)];
        let mut vec = Vec::new();
        let rows = 1usize;
        let cols = count!($($x0)*);
        $( vec.push($x0); )*
        $(
            let rows = rows + 1usize;
            let _assert_width = [(); count!($($x)*)];
            _assert_width0 = _assert_width;
            $( vec.push($x); )*
        )*
        Matrix { cols: cols, rows: rows, data: vec }
    } }
}
The count! macro expands to a constant expression that represents the number of arguments it got as input. It's just a helper for the mat! macro. If you need to count a lot of items and the compiler can't cope with it, see the Counting chapter in The Little Book of Rust Macros, which has more complex macros for counting.
My version of the macro uses dummy variables and assignments to verify that all rows have the same width. First off, I changed the macro's pattern to handle the first row separately from the subsequent rows. The first variable, _assert_width0, is initialized with an array of units ((), which makes the array take no memory), with the size of the array being the number of items in the first row. Then, _assert_width is also initialized with an array of units, with the size of the array being the number of items in each subsequent row. Then, _assert_width is assigned to _assert_width0. The magic here is that this line will raise a compiler error if the width of a row doesn't match the width of the first row, since the array types won't match (you might have e.g. [(); 3] and [(); 4]). The error isn't super clear if you don't know what's going on in the macro, though:
<anon>:38:24: 38:37 error: mismatched types:
expected `[(); 3]`,
found `[(); 4]`
(expected an array with a fixed size of 3 elements,
found one with 4 elements) [E0308]
<anon>:38 _assert_width0 = _assert_width;
^~~~~~~~~~~~~
<anon>:47:13: 47:44 note: in this expansion of mat! (defined in <anon>)
<anon>:38:24: 38:37 help: see the detailed explanation for E0308
First, to quickly address the title of your question: see the Counting chapter in The Little Book of Rust Macros. To summarise: there is no direct way, you need to write a macro that expands to something you can count in regular code.
Now, to address your actual question: hoo boy.
It's not so much counting that you want, it's to fail at compile time if the sub-sequences have different lengths.
First of all, there's no clean way to trigger a compilation failure from a macro. You can trigger some other pre-existing error, but you can't control the actual error message.
Secondly, there's no easy way to do "variable" comparisons in macros at all. You can sometimes compare against a fixed token sequence, but you're not doing that here.
So it's doubly not-really-doable.
The simplest thing to do is check the lengths during construction at runtime, and return an error or panic if they don't match.
Is it actually impossible? I don't believe so. If you're willing to accept inscrutable error messages and a massive jump in complexity, you can check for length equality between two token sequences like so:
macro_rules! tts_equal_len {
    (($_lhs:tt $($lhs_tail:tt)*), ($_rhs:tt $($rhs_tail:tt)*)) => {
        tts_equal_len!(($($lhs_tail)*), ($($rhs_tail)*))
    };
    (($($_lhs_tail:tt)+), ()) => { do_something_bad!() };
    ((), ($($_rhs_tail:tt)+)) => { do_something_bad!() };
    ((), ()) => { do_something_good!() };
}

macro_rules! do_something_bad { () => { { println!("kaboom!") } } }
macro_rules! do_something_good { () => { { println!("no kaboom!") } } }

fn main() {
    tts_equal_len!((,,,), (,,,));
    tts_equal_len!((,,,), (,,));
    tts_equal_len!((,), (,,));
}
Again, the real problem is finding some way to fail at compile time such that the user will understand why compilation failed.
Update: there's a new way of doing things
As of the day on which this was written, the feature of Rust which enables the following (count) to be done is still unstable and is available only in nightly builds.
You can check out the GitHub issues and test cases for a further understanding of what's given below.
To enable this feature, you need to add the line #![feature(macro_metavar_expr)] to the top of the crate's root module (usually main.rs or lib.rs), and also set your repo to use nightly builds, which is easily done by creating a file rust-toolchain.toml in the root directory (alongside Cargo.toml) and adding the following lines to it:
[toolchain]
channel = "nightly"
Now, instead of providing a solution to you specific problem, I'd like to share a generic solution I created to better illustrate most situations.
I highly recommend studying the code AND the comments, by pasting the following two code blocks in a file (main.rs).
The macro_rules
#[derive(Eq, PartialEq, Debug, Copy, Clone)]
struct SumLen {
    sum: i32,
    len: u32
}

/// currently one `i32` type is available
///
/// # Examples
///
/// The output of the following:
/// ```ignore
/// sumnarr!(i32 => 5 ; 6, 7, 8)
/// ```
/// will be `[(5, 1), (21, 3)]`
macro_rules! sumnarr {
    ( $type:ty => $( $( $x: expr ),* );* ) => {
        {
            // `${count(x,0)}` will give you "length" (number of iterations)
            // in the `$( )*` loop that you are IMMEDIATELY OUTSIDE OF (e.g. the `$( )*` loop below)
            // `${count(x,1)}` will give you the TOTAL number of iterations that the `$( )*` loop
            // INSIDE of the IMMEDIATE `$( )*` loop will make, i.e. it is similar to converting
            // [ [i1,i2], [i1,i2,i3] ] TO [ i1,i2,i3,i4,i5 ], i.e. flattening the nested iteration.
            // In case of `[ [i1,i2], [i1,i2,i3] ]`, `${count(x,0)}` is 2 and `${count(x,1)}` is 5
            let mut arr: [SumLen; ${count(x,0)}] = [SumLen { sum: 0, len: 0 }; ${count(x,0)}];
            $(
                // `${index()}` refers to the iteration number within the `$( )*` loop
                arr[${index()}] = {
                    let mut sum = 0;
                    //let mut len = 0;
                    // The following will give us the length of the loop it is IMMEDIATELY OUTSIDE OF
                    // (the one just below)
                    let len = ${count(x,0)};
                    $(
                        sum += $x;
                        // If you were NOT using `$x` somewhere else inside `$( )*`,
                        // then you should use `${ignore(x)};` to inform the compiler
                        //You could use the below method, where `${length()}` will give you
                        //"length" or "number of iterations" in the current loop that you are in
                        // OR
                        // you could go with my method of `${count(x,0)}` which is explained above
                        //len = ${length()};
                    )*
                    SumLen {
                        sum,
                        len
                    }
                };
            )*
            arr
        }
    };
}
The #[test] (unit test)
#[test]
fn sumnarr_macro() {
    let (a, b, c, d, e) = (4, 5, 6, 9, 10);
    let sum_lens = [
        SumLen {
            sum: a + e,
            len: 2
        },
        SumLen {
            sum: b + c + d,
            len: 3
        }
    ];
    assert_eq!(sum_lens, sumnarr!(i32 => a,e;b,c,d));
}
I hope this helps

java heap space error when converting csv to json but no error with d3.csv()

Platform being used: Apache Zeppelin
Language: scala, javascript
I use d3js to read a csv file of size ~40MB and it works perfectly fine with the below code:
<script type="text/javascript">
  d3.csv("test.csv", function(data) {
    // data is JSON array. Do something with data;
    console.log(data);
  });
</script>
Now, the idea is to avoid d3js and instead construct the JSON array in Scala and access this variable in JavaScript code through z.angularBind(). Both of the code snippets below work for smaller files, but give a java heap space error for the 40MB CSV file. What I am unable to understand is: when d3.csv() can do the job without any heap space error, why can't these two snippets?
Edited Code 1: using Scala's BufferedReader
import java.io.BufferedReader;
import java.io.FileReader;
import org.json._
import scala.io.Source

var br = new BufferedReader(new FileReader("/root/test.csv"))
var contentLine = br.readLine();
var keys = contentLine.split(",")
contentLine = br.readLine();
var ja = new JSONArray();
while (contentLine != null) {
  var splits = contentLine.split(",")
  var i = 0
  var jo = new JSONObject()
  for (i <- 0 to splits.length-1) {
    jo.put(keys(i), splits(i));
  }
  ja.put(jo);
  contentLine = br.readLine();
}
//z.angularBind("ja",ja.toString()) //ja can be accessed now in javascript (EDITED-10/11/15)
Edited Code 2:
I thought the heap space issue might go away if I used Apache Spark to construct the JSON array, as in the code below, but this one also gives a heap space error:
def myf(keys: Array[String], value: String): String = {
  var splits = value.split(",")
  var jo = new JSONObject()
  for (i <- 0 to splits.length-1) {
    jo.put(keys(i), splits(i));
  }
  return(jo.toString())
}

val csv = sc.textFile("/root/test.csv")
val firstrow = csv.first
val header = firstrow.split(",")
val data = csv.filter(x => x != firstrow)
var g = data.map(value => myf(header, value)).collect()

// EDITED BELOW 2 LINES-10/11/15
//var ja= g.mkString("[", ",", "]")
//z.angularBind("ja",ja) //ja can be accessed now in javascript
You are creating JSON objects. They are not native to Java/Scala and will therefore take up more space in that environment. What does z.angularBind() really do?
Also, what is the heap size of your JavaScript environment (see https://www.quora.com/What-is-the-maximum-size-of-a-JavaScript-object-in-browser-memory for Chrome) and your Java environment (see "How is the default java heap size determined?")?
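As a quick diagnostic on the JVM side, you can print the runtime heap limits from a Scala paragraph; this is only a sketch using the standard java.lang.Runtime calls, not part of the original answer:
// Report the interpreter JVM's heap limits in MB
val rt = Runtime.getRuntime
println(s"max heap:   ${rt.maxMemory / (1024 * 1024)} MB")
println(s"total heap: ${rt.totalMemory / (1024 * 1024)} MB")
println(s"free heap:  ${rt.freeMemory / (1024 * 1024)} MB")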
Update: Removed the original part of the answer where I misunderstood the question

specman: Assign multiple struct member in one expression

Hi,
I am expanding an existing Specman test where some code like this appears:
struct dataset {
  !register : int (bits:16);
  ... other members
}
...
data : list of dataset;
foo : dataset;
gen foo;
foo.register = 0xfe;
... assign other foo members ...
data.push(foo.copy());
Is there a way to assign the members of the struct in one line? Something like:
foo = { 0xff, ... };
I currently can't think of a direct way of setting all members as you want, but there is a way to initialize variables (I'm not sure if it works on struct members as well). Anyway, something like the following may fit for you:
myfunc() is {
  var foo : dataset = new dataset with {
    .register = 0xff;
    .bar = 0xfa;
  }
  data.push(foo.copy());
}
You can find more information about new ... with by typing help new struct at the Specman prompt.
Hope it helps!
The simple beauty of assigning fields by name is one language feature I've always found useful, safe to code, and readable.
This is how I'd go about it:
struct s {
  a : int;
  b : string;
  c : bit;
};

extend sys {
  ex() is {
    var s := new s with {.a = 0x0; .b = "zero"; .c = 0;};
  };
  run() is also {
    var s;
    gen s keeping {.a == 0x0; .b == "zero"; .c == 0;};
  };
};
I even do data.push(new dataset with {.reg = 0xff; .bar = 0x0;}); but you may raise the readability flag if you want.
Warning: using unpack() is perfectly correct (see Ross's answer), however it is error prone IMO. I recommend verifying (with code that actually runs) every place you opt to use unpack().
You can directly use the pack and unpack facility of Specman with "physical fields" (those instance members prefixed with the modifier %).
Example:
define FLOODLES_WIDTH 47;
type floodles_t : uint(bits:FLOODLES_WIDTH);

define FLABNICKERS_WIDTH 28;
type flabnickers_t : uint(bits:FLABNICKERS_WIDTH);

struct foo_s {
  %!floodle : floodles_t;
  %!flabnicker : flabnickers_t;
};

extend sys {
  run() is also {
    var f : foo_s = new;
    unpack(packing.low, 64'hdeadbeefdeadbeef, f);
    print f;
    unpack(packing.low, 64'hacedacedacedaced, f);
    print f;
  };
  setup() is also {
    set_config(print, radix, hex);
  };
};
When this runs, it prints:
Loading /nfs/pdx/home/rbroger1/tmp.e ...
read...parse...update...patch...h code...code...clean...
Doing setup ...
Generating the test using seed 1...
Starting the test ...
Running the test ...
f = foo_s-#0: foo_s of unit: sys
---------------------------------------------- #tmp
0 !%floodle: 0x3eefdeadbeef
1 !%flabnicker: 0x001bd5b
f = foo_s-#0: foo_s of unit: sys
---------------------------------------------- #tmp
0 !%floodle: 0x2cedacedaced
1 !%flabnicker: 0x00159db
Look up packing, unpacking, physical fields, packing.low, packing.high in your Specman docs.
You can still use physical fields even if the struct doesn't map to the DUT. If your struct is already using physical fields for some other purpose then you'll need to pursue some sort of set* method for that struct.