enclosing unit using virtual sequence - specman

I have the following problem:
I have a virtual sequence driver whose type is top_sequence_driver_u.
In its MAIN sequence I do a sequence (called s1), keeping its driver pointing to a BFM sequence driver whose type is another_sequence_driver_u.
Then, in the s1 sequence, I do an item. So far so good.
The problem arises when I use the try_enclosing_unit() method.
I want to get a reference of type another_sequence_driver_u from the generated item, but I actually get a reference to the top_sequence_driver_u.
How can I keep using the virtual sequence, but have my item carry the another_sequence_driver_u reference rather than the virtual sequence driver's type?

If you want a reference to the driver a sequence was started on, you could just use the driver field that every sequence has.
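A minimal sketch in e (assuming s1 was declared with created_driver = another_sequence_driver_u, as the question implies):

extend s1 {
    body() @driver.clock is also {
        // 'driver' is the BFM sequence driver this sequence was started on,
        // not the virtual driver that did the s1 sequence
        var bfm_drv: another_sequence_driver_u = driver;
    };
};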

try_enclosing_unit() can only return the unit that instantiated this struct (or the struct containing it), and in this case that's the virtual sequence driver.
Why do you have to use the try_enclosing_unit() method?

Spark serializes variable value as null instead of its real value

My understanding of the mechanics of Spark's code distribution to the nodes running it is merely cursory, and I am failing to get my code to run successfully within Spark's mapPartitions API when I wish to instantiate a class for each partition, with an argument.
The code below worked perfectly, up until I evolved the class MyWorkerClass to require an argument:
val result: DataFrame =
  inputDF.as[Foo].mapPartitions(sparkIterator => {
    // (1) initialize heavy class instance once per partition
    val workerClassInstance = MyWorkerClass(bar)
    // (2) provide an iterator using a function from that class instance
    new CloseableIteratorForSparkMapPartitions[Post, Post](sparkIterator, workerClassInstance.recordProcessFunc)
  })
The code above worked perfectly well up to the point when I had (or chose) to add a constructor argument to my class MyWorkerClass. The passed argument value turns out to be null in the worker, instead of the real value of bar. Somehow the serialization of the argument fails to work as intended.
How would you go about this?
Additional Thoughts/Comments
I'll avoid adding the bulky code of CloseableIteratorForSparkMapPartitions ― it merely provides a Spark-friendly iterator and may not even be the most elegant implementation at that.
As I understand it, the constructor argument is not passed correctly to the Spark worker because of how Spark captures state when serializing code for execution on the worker. However, instantiating the class does seamlessly make the heavy-to-load assets of that class available to the function provided on the last line of the code above, and the class did seem to instantiate once per partition ― which is a valid, if not key, use case for mapPartitions over map.
It is the passing of an argument to the instantiation that I am having trouble enabling or working around. In my case the argument is a value only known after the program has started running (though invariant throughout a single execution of the job; it is actually a program argument). I do need it passed along for the initialization of the class.
I tried to work around this by providing a function that instantiates MyWorkerClass with its input argument, rather than instantiating it directly as above, but this did not solve matters.
The root symptom of the problem is not an exception: the value of bar when MyWorkerClass is instantiated is simply null, instead of the actual value of bar, which is known in the scope of the code enveloping the snippet included above!
* one related old Spark issue discussion here
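One classic cause of exactly this symptom (not confirmed by anything in the question) is that bar is a field of an enclosing class, so the closure captures the entire enclosing instance rather than just the value. A sketch of the usual local-copy workaround; Post, MyWorkerClass and CloseableIteratorForSparkMapPartitions are the poster's types, while JobRunner, spark and localBar are made up here:

import org.apache.spark.sql.{Dataset, SparkSession}

class JobRunner(@transient val spark: SparkSession, bar: String) extends Serializable {
  import spark.implicits._

  def run(input: Dataset[Post]): Dataset[Post] = {
    // Referencing the field 'bar' inside the closure would capture 'this'
    // (the whole JobRunner); when that capture misbehaves, 'bar' shows up as null.
    val localBar = bar // copy into a local val so only the String is serialized
    input.mapPartitions { sparkIterator =>
      val worker = MyWorkerClass(localBar) // now sees the real value
      new CloseableIteratorForSparkMapPartitions[Post, Post](
        sparkIterator, worker.recordProcessFunc)
    }
  }
}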

I am getting an error while trying to pass the data from scoreboard to sequence, how to get rid of it?

I am new to UVM and I am trying to verify a memory design, where I want to run a write sequence multiple times, followed by a read sequence the same number of times, so that I can read back the same addresses I wrote to and compare. For this I tried to create a new class extended from uvm_object, with a queue to store the addresses I am writing to, so that I can use them in the read sequence. I am instantiating this class in the scoreboard and then sending the class handle to the read sequence via uvm_config_db. Now the issue is that I am able to store addresses in the queue, but unable to get the class handle in the read sequence. Is this the right way of checking, or is there some better way to check the write and read-back from memory? Please help!
entire code link (yet to complete): https://www.edaplayground.com/x/3iTr
Relevant code snippets:
This is the class I created to store the addresses
class address_list extends uvm_object;
  reg [7:0] addr_q[$];

  function new(string name);
    super.new(name);
  endfunction
endclass
In my scoreboard, I am passing the handle of the class with the address queue to the read sequence; here is the snippet from the scoreboard:
virtual function void write(mem_seq_item pkt);
  if(pkt.wr_en==1)
  begin
    pkt_qu_write.push_back(pkt);
    addr.addr_q.push_back(pkt.addr);
    uvm_config_db#(address_list)::set(uvm_root::get(),"*","address",addr);
  end
  if(pkt.rd_en==1)
    pkt_qu_read.push_back(pkt);
  `uvm_info(get_type_name(),$sformatf("Address list is %p",addr.addr_q),UVM_LOW)
endfunction : write
In my read sequence, I am trying to get the handle
virtual task body();
  repeat(3)
    `uvm_do(wr_seq)
  if(!uvm_config_db#(address_list)::get(this, " ", "address", addr_))
    `uvm_fatal("NO_VIF",{"virtual interface must be set for:",get_full_name(),".addr_"});
  `uvm_info(get_type_name(),$sformatf("ADDR IS %p",addr_),UVM_LOW)
  repeat(3)
    `uvm_do(rd_seq)
endtask
Error-[ICTTFC] Incompatible complex type usage
mem_sequence.sv, 137 {line where I try to get from uvm_config_db}
Incompatible complex type usage in task or function call.
The following expression is incompatible with the formal parameter of the
function. The type of the actual is 'class $unit::wr_rd_sequence', while the
type of the formal is 'class uvm_pkg::uvm_component'. Expression: this
Source info: uvm_config_db#(_vcs_unit__3308544630::address_list)::get(this, " ", "address", this.addr_)
There are two problems with this line:
if(!uvm_config_db#(address_list)::get(this, " ", "address", addr_))
One is causing your error; the other might lead to you not being able to find what you're looking for in the database.
This (literally this) is causing your error. You are calling get from a class derived from uvm_sequence. The first argument to get is expecting a class derived from uvm_component. Your problem is that a sequence is not part of the testbench hierarchy, so you cannot use a sequence as the first argument to a call to get (or set) in a uvm_config_db. Instead the convention is to use the sequencer that the sequence is running on, which is returned by a call to the sequence's get_sequencer() method. This solves your problem:
if(!uvm_config_db#(address_list)::get(get_sequencer(), "", "address", addr_))
This works because you used a wildcard when you called set.
Notice that I also removed the space from between the quotes. That might not give you a problem, because you used the wildcard when you called set, but in general this string should either be empty or should be a real hierarchical path. (The hierarchy input to the set and get calls is split between the first argument - a SystemVerilog hierarchical path - and the second - a string representing a hierarchical path).
uvm_config_db is basically for passing configuration between components.
For the purpose of passing data from a scoreboard to a sequence, you can use a uvm_event:
trigger the event in the scoreboard using event.trigger(address_list);
have the sequence wait for the event using event.wait_for_trigger_data(address_list).
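A minimal sketch of that idea, assuming both sides look the event up in the global event pool under an agreed name ("addr_ev" is a made-up name here):

// scoreboard side: publish the address list through a named global event
uvm_event ev = uvm_event_pool::get_global("addr_ev");
ev.trigger(addr); // addr is the address_list handle built in write()

// sequence side: block until the data arrives, then cast it back
uvm_event ev = uvm_event_pool::get_global("addr_ev");
uvm_object tmp;
ev.wait_for_trigger_data(tmp);
if (!$cast(addr_, tmp))
  `uvm_fatal("CAST", "trigger data was not an address_list")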

User defined postgresql types using Npgsql from F#

We use postgresql's features to the maximum to ease our development effort. We make heavy use of custom types (user defined types) in postgresql; most of our functions and stored procedures either take them as input parameters or return them.
We would like to make use of them from F#'s SqlDataProvider. That means we should somehow be able to tell F# how to map an F# user type to a postgresql user type. In other words:
Postgresql has our defined user type post_user_defined
F# has our defined user type fsharp_user_defined
We should instruct Npgsql to somehow perform this mapping. My research so far points me to two approaches, and neither of them is completely clear to me. Any help is appreciated.
Approach 1
The NpgsqlTypes namespace has a pre-defined set of postgresql types mapped to .NET out of the box. A few of them are classes, others structures. Say I would like to use postgresql's built-in type point, which is mapped to .NET by Npgsql via NpgsqlPoint. I can map this to an application-specific data structure like this:
let point (x,y) = NpgsqlTypes.NpgsqlPoint(x,y)
(From PostgreSQLTests.fsx)
In this case, postgresql point and NpgsqlPoint (.NET) are already defined. Now I would like to do the same for my custom type.
Suppose the user defined postgresql composite is
create type product_t as ( name text, product_type text);
And the application data structure (F#) is the record
type product_f = {name :string; ptype :string }
or a tuple
type product_f = string * string
How do I tell Npgsql to make use of my type when it is passed as a parameter to postgresql functions/procedures? It looks like I will need to use NpgsqlTypes.NpgsqlDbType.Composite or Npgsql.PostgresCompositeType, which doesn't have a public constructor.
I am at a dead end here!
Approach 2
Taking a cue from this post, I could create a custom type, register it with MapCompositeGlobally, and use it to pass to postgresql functions. So, here I try my hand at it.
On Postgresql side, the type and functions are respectively
CREATE TYPE product_t AS
(name text,
product_type text)
and
func_product(p product_t) RETURNS void AS
And from my application in F#
type PgProductType(Name:string, ProductType:string) =
    member this.Name = Name
    member this.ProductType = ProductType
    new() = PgProductType("","")
Npgsql.NpgsqlConnection.MapCompositeGlobally<PgProductType>("product_t",null)
and then
type Provider = SqlDataProvider
let ctx = Provider.GetDataContext()
let prd = new PgProductType("F#Product","")
ctx.Functions.FuncProduct.Invoke(prd);;
ctx.Functions.FuncIproduct.Invoke(prd);;
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
stdin(29,1): error FS0501: The member or object constructor 'Invoke' takes 0 argument(s) but is here given 1. The required signature is 'SqlDataProvider<...>.dataContext.Functions.FuncIproduct.Result.Invoke() : Unit'.
It is strange that the error reports that the constructor 'Invoke' takes 0 arguments but is given 1: the F# side of things is completely blind to the argument that the postgresql function takes. It does recognize that the function FuncIproduct exists, but it is blind to the arguments it takes.
Regarding your 1st approach: as you've understood, NpgsqlTypes contains some types which Npgsql supports out of the box - but these are only PostgreSQL built-in types. You cannot add a new type there without changing Npgsql's source code, which isn't something you want to do.
Also, you should understand the difference between user-defined types (which PostgreSQL calls "composite") and totally independent types such as point. The latter are full types (similar to int4), with their own custom binary representation, while the former aren't.
Your 2nd approach is the right one - Npgsql comes with full support for PostgreSQL composite types. I have no idea how SqlDataProvider functions - I'm assuming this is an F#-specific type provider - but once you've properly mapped your composite via MapCompositeGlobally, Npgsql allows you to write it transparently by setting an NpgsqlParameter's Value to an instance of PgProductType. It may be worth trying to get that working without the type provider first.
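As a sanity check of that last point, a minimal sketch that bypasses SqlDataProvider and calls the function through plain Npgsql (the connection string is a placeholder; PgProductType, product_t and func_product follow the snippets above):

open Npgsql

NpgsqlConnection.MapCompositeGlobally<PgProductType>("product_t")
use conn = new NpgsqlConnection("Host=localhost;Database=mydb;Username=me") // placeholder
conn.Open()
use cmd = new NpgsqlCommand("SELECT func_product(@p)", conn)
// the mapped composite is passed simply as the parameter's value
cmd.Parameters.AddWithValue("p", PgProductType("F#Product", "")) |> ignore
cmd.ExecuteNonQuery() |> ignore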

Automatic casting in Scala

I have a class that inherits the Actor trait. In my code, I have a method that creates x instances of this actor using a loop and another method that simply sends the Finish message to all of them to tell them to terminate. I made the kill method take an Array[Actor], since I want to be able to use it with an array of any type of Actor. For some reason, however, when I pass a value of type Array[Producer], where Producer extends Actor, to a method that accepts Array[Actor], I get a type error. Shouldn't Scala see that Producer is a type of Actor and automatically cast this?
What you are describing is called covariance, and it is a property of most of the collection classes in Scala--a collection of a subtype is a subtype of the collection of the supertype. However, since Array is a Java primitive array, it is not covariant--a collection of a subtype is simply different. (The situation is more complicated in 2.7 where it's almost a Java primitive array; in 2.8 Array is just a plain Java primitive array, since the 2.7 complications turned out to have unfortunate corner cases.)
If you try the same thing with an ArrayBuffer (from collection.mutable <- edit: this part is wrong, see comments) or a List (<- edit: this is true) or a Set (<- edit: no, Set is also invariant), you'll get the behavior you want. You could also create an Array[Actor] to begin with but always feed it Producer values.
If for some reason you really must use Array[Producer], you can still cast it using .asInstanceOf[Array[Actor]]. But I suggest using something other than primitive arrays--anything you could possibly be doing with actors will be far slower than the tiny overhead of using a more full-featured collection class.
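To make the difference concrete, here is a small self-contained sketch (the class names stand in for the poster's actors):

class Actor
class Producer extends Actor

object VarianceDemo {
  // accepts any covariant collection of Actor
  def killAll(actors: List[Actor]): Unit = actors.foreach(_ => ())

  // arrays are invariant, so only exactly Array[Actor] will do
  def killArray(actors: Array[Actor]): Unit = actors.foreach(_ => ())

  def main(args: Array[String]): Unit = {
    val producers = List(new Producer, new Producer)
    killAll(producers) // compiles: List[+A] is covariant

    val producerArr = Array(new Producer)
    // killArray(producerArr)                         // does not compile: Array[T] is invariant
    killArray(producerArr.asInstanceOf[Array[Actor]]) // the cast workaround, safe at runtime here
  }
}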

How to workaround the XmlSerialization issue with type name aliases (in .NET)?

I have a collection of objects, and for each object I have its type FullName and either its value (as a string) or a subtree of inner objects' details (again, type FullName and value or a subtree).
For each root object I need to create a piece of XML that will be xml de-serializable.
The problem with XmlSerializer is that, for example, the following object
int age = 33;
will be serialized to
<int>33</int>
At first this seems to be perfectly OK; however, when working with reflection you will be using System.Int32 as the type name, int being merely an alias, and this
<System.Int32>33</System.Int32>
will not deserialize.
Now additional complexity comes from the fact that I need to handle any possible data type.
So solutions that utilize System.Activator.CreateInstance(..) and casting won't work, unless I go down the path of code generation & compilation as a way of achieving this (which I would rather avoid).
Notes:
Quick research with .NET Reflector revealed that XmlSerializer uses the internal class TypeScope; look at its static constructor to see that it initializes an internal Hashtable with the mappings.
So the question is what is the best (and I mean elegant and solid) way to workaround this sad fact?
I don't exactly see where your problem stems from. XmlSerializer will use the same syntax/mapping for serializing as for deserializing, so there is no conflict when using it for both.
The type tags used are probably some XML-standard thing, but I'm not sure about that.
I guess the problem is more your usage of reflection. Do you instantiate your imported/deserialized objects by calling Activator.CreateInstance?
I would recommend the following instead, if you have some type Foo to be created from the XML in xmlReader:
Foo DeserializedObject = (Foo)new XmlSerializer(typeof(Foo)).Deserialize(xmlReader);
Alternatively, if you don't want to switch to the XmlSerializer completely, you could do some preprocessing of your input. The standard way would be to create an XSLT in which you transform all those type elements to their aliases, or vice versa. Then, before processing the XML, you apply your transformation using System.Xml.Xsl.XslCompiledTransform, using your (or the Reflector-derived) mappings for each type.
Why don't you serialize each field's type as an attribute?
Instead of
<age>
  <int>33</int>
</age>
you could do
<age type="System.Int32">33</age>
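If you go that route, reading a value back is straightforward, since the full name round-trips through Type.GetType. A minimal sketch (element and attribute names follow the example above; Convert.ChangeType covers simple types only):

using System;
using System.Xml;

class TypedXmlDemo
{
    static object ReadTyped(XmlElement element)
    {
        // "System.Int32" resolves via Type.GetType, unlike the alias "int"
        Type type = Type.GetType(element.GetAttribute("type"));
        // convert the element text to the declared type
        return Convert.ChangeType(element.InnerText.Trim(), type);
    }

    static void Main()
    {
        var doc = new XmlDocument();
        doc.LoadXml("<age type=\"System.Int32\">33</age>");
        object age = ReadTyped(doc.DocumentElement); // boxed int 33
        Console.WriteLine("{0} ({1})", age, age.GetType().FullName);
    }
}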