I have to update a few enumerated data types that are declared inside a package, and mine is a special scenario where the size of my enum data type varies with a parameter value.
I have to make that parameter value somehow visible to the package.
I am aware that packages are not components that can be instantiated, hence I cannot pass the parameters directly.
Could anyone help me get this done with the help of some tweaks?
PS: The requirement is related to the testbench (TB).
What we usually do for types whose lengths have to be parameterized is use defines instead of package parameters:
package some_package_pkg;
  `ifndef MAX_DATA_WIDTH
    `define MAX_DATA_WIDTH 32
  `endif

  typedef bit [`MAX_DATA_WIDTH-1:0] some_type;
  ...
endpackage
By default, MAX_DATA_WIDTH is 32, but if we need a bigger width, we just pass the define from the command line. For Incisive it is something like:
irun -D MAX_DATA_WIDTH=64 some_package_pkg.sv
If you want to retrofit an existing package that uses a parameter you could do:
package some_param_package_pkg;
  parameter P_MAX_DATA_WIDTH = `MAX_DATA_WIDTH; // just add this line
  typedef bit [P_MAX_DATA_WIDTH-1:0] some_type; // all declarations are unchanged
endpackage
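Since the original question was about enumerated types, here is a minimal sketch of the same trick applied to an enum; the package, type, and literal names are made up for illustration:

package some_enum_pkg;
  `ifndef STATE_WIDTH
    `define STATE_WIDTH 4
  `endif

  // The enum's base type width follows the define, so it can be overridden
  // from the command line just like MAX_DATA_WIDTH above.
  typedef enum bit [`STATE_WIDTH-1:0] {
    IDLE = 0,
    BUSY = 1,
    DONE = 2
  } state_t;
endpackage

Overriding the define then resizes every declaration of state_t consistently, without touching the package source.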
I have known about function typedefs in Dart for a long time. They are also explained in answers to this question.
Now, I have heard about non-function type aliases (or non-function typedefs) coming to Dart.
I am wondering two things:
What exactly are (non-function) typedefs in Dart?
How do I use them (in my Flutter project)?
Generalized type aliases / typedefs in Dart
You can view the feature specification for generalized type aliases for the full design document.
I want to preface this by pointing out that Dart used to only support typedefs for functions. The new generalized feature supports typedefs for any type.
import 'dart:convert';

typedef JsonMap = Map<String, dynamic>;

JsonMap parseJsonMap(String input) => json.decode(input) as JsonMap;
This is especially useful when you have multiple generic types (type parameters) that cause long type names that are tedious to type, for example Map<ScaffoldFeatureController<SnackBar, SnackBarClosedReason>, SnackBar>. This can now be simplified using a type alias:
typedef ScaffoldSnackBarMap = Map<ScaffoldFeatureController<SnackBar, SnackBarClosedReason>, SnackBar>;
Syntax
If it is not clear from the above examples, this is the syntax for type aliases / typedefs:
'typedef' identifier typeParameters? '=' type ';'
This means that you always need to start with the typedef keyword followed by your desired identifier, e.g. FooTypeDef. After that, you can add type parameters, e.g. Foo<K, V>. The last step is adding the = symbol followed by the actual type you want to create an alias for. This can be any type, e.g. a class, a primitive type, or a function type. Do not forget the ; at the end ;)
// Type parameters / generic types in typedef.
typedef Foo<K, V> = Map<K, V>;
// Type alias for regular types.
typedef Bar = Widget;
// As well as primitive types.
typedef Baz = int;
// Function types are also supported.
typedef FooFunction<T, R> = R Function(T param);
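A small usage sketch (assuming the aliases above are in scope), showing that an alias and the type it abbreviates are fully interchangeable:

// Foo<String, int> is just another name for Map<String, int>.
Foo<String, int> counts = {'answer': 42};

// Baz is just another name for int.
Baz answer = counts['answer']!;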
Deprecating names
Additionally, you can use typedefs to rename any class. Say you want to rename your class from Provider to Pod because you think the former is too verbose. If you are maintaining a package, this would be a breaking change. With the new generalized type aliases, you can simply rename your class and create a deprecated type alias for the old name:
class NewClassName<T> {}
@Deprecated("Use NewClassName instead")
typedef OldClassName<T> = NewClassName<T>;
Note that this example and the one above are taken from the proposed CHANGELOG entry for the feature.
How to use them
The feature will be shipped by default with Dart 2.13 but is currently still experimental. I will cover both ways of using it; the experimental setup can be removed later on.
Dart 2.13
As I mentioned previously, the feature will be enabled by default starting with Dart 2.13. If you already have Dart 2.13 installed (you can check with dart --version, for example), you can use this method. Otherwise, refer to the Experimental support section below.
In your pubspec.yaml, you need to define the lower bound on your Dart SDK constraint to be greater than or equal to 2.13.0:
environment:
  sdk: '>=2.13.0 <3.0.0'
Experimental support
In your Flutter project (or any other Dart project), you currently need to enable the feature as an experiment, which means it is hidden behind a feature flag.
Experimental Dart features can be configured using analysis_options.yaml. You can simply create an analysis_options.yaml file in the root of your project directory and add the following lines:
analyzer:
  enable-experiment:
    - nonfunction-type-aliases
Now, you need to also enable the experiment when you run (or build) your app:
flutter run --enable-experiment=nonfunction-type-aliases
To make sure that you can use this feature, use the master channel (flutter channel master when using Flutter).
Having tons of registers defined in my hardware, each containing bit fields, I wanted to 'name' those registers and access the bit fields in SystemVerilog using their names instead of the msb:lsb format. So I made a new package and declared constant parameters inside it, and also tried parameters that describe a range. Something like this:
package VmeAddressMap;
  parameter SYS_INTCONFIG = 32'h00000044;
  parameter RSYS_INTCONFIGRORA = 31:16;
  parameter RSYS_INTCONFIGENABLE = 15:0;
endpackage // VmeAddressMap
Quite evidently, this does not work. So I came up with a 'hybrid' solution, i.e. the simple constants stay in the package, and for the ranges I made another file, which contains macros:
package file:
package VmeAddressMap;
  parameter SYS_INTCONFIG = 32'h00000044;
endpackage // VmeAddressMap
macro file:
`define RSYS_INTCONFIGRORA 31:16
`define RSYS_INTCONFIGENABLE 15:0
This solution permits me to do things as follows (Read is a task reading data through the VME bus):
Read(SYS_INTCONFIG);
`CHECK_EQUAL(LastVmeReadData_b32[`RSYS_INTCONFIGRORA], 15,
"IRQ setup invalid");
This works and does what I want. However, I don't like it, in particular the mixing of macros with the SystemVerilog style of description.
Is there a way how to accomplish the same task directly in the package?
This is exactly what the UVM register abstraction layer does for you. You define fields, giving them a name, bit width, and other attributes. Those fields are grouped into registers, and registers are grouped into blocks with addresses and offsets.
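For reference, a hedged sketch of what a RAL model of the SYS_INTCONFIG register from the question might look like (the class and field names here are illustrative, not a drop-in model):

class sys_intconfig_reg extends uvm_reg;
  `uvm_object_utils(sys_intconfig_reg)

  rand uvm_reg_field RORA;
  rand uvm_reg_field ENABLE;

  function new(string name = "sys_intconfig_reg");
    super.new(name, 32, UVM_NO_COVERAGE);
  endfunction

  virtual function void build();
    RORA   = uvm_reg_field::type_id::create("RORA");
    ENABLE = uvm_reg_field::type_id::create("ENABLE");
    //              parent, size, lsb, access, volatile, reset, has_reset, is_rand, individually_accessible
    RORA.configure  (this,  16,   16,  "RW",   0,        0,     1,         1,       0);
    ENABLE.configure(this,  16,   0,   "RW",   0,        0,     1,         1,       0);
  endfunction
endclass

The register is then added to a uvm_reg_block whose address map places it at offset 'h44, and the testbench reads and checks fields by name through the RAL API.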
Now that I've told you that, here's a simple solution that does what you are looking for using the let construct.
package VmeAddressMap;
  parameter SYS_INTCONFIG = 32'h00000044;
  let RSYS_INTCONFIGRORA(field) = field[31:16];
  let RSYS_INTCONFIGENABLE(field) = field[15:0];
endpackage // VmeAddressMap
But now you have to put the field name in front of the variable instead of a range select after it.
`CHECK_EQUAL(RSYS_INTCONFIGRORA(LastVmeReadData_b32), 15,
"IRQ setup invalid");
You can use one parameter for the MSB and another for the LSB.
parameter RSYS_INTCONFIGRORA_MSB = 31;
parameter RSYS_INTCONFIGRORA_LSB = 16;
LastVmeReadData_b32[RSYS_INTCONFIGRORA_MSB:RSYS_INTCONFIGRORA_LSB]
That is a bit unwieldy, so if everything is 16 bits wide you can just define the LSB:
parameter RSYS_INTCONFIGRORA = 16;
LastVmeReadData_b32[RSYS_INTCONFIGRORA +: 16]
Or, you can use a struct:
typedef struct packed {
  logic [15:0] RSYS_INTCONFIGRORA;
  logic [15:0] RSYS_INTCONFIGENABLE;
} some_register_t;
You can further make a union with that struct if some parts of the design need to interact with the whole register object and others with just the bit fields.
These register structs can be built up into a much larger register map struct.
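A hedged sketch of that union and of nesting the struct into a larger map (the _u and register-map type names are made up):

typedef union packed {
  logic [31:0]    value;   // whole-register view
  some_register_t fields;  // named bit-field view
} some_register_u;

typedef struct packed {
  some_register_u SYS_INTCONFIG;
  // further registers of the map go here
} vme_register_map_t;

With this, r.value gives the whole 32-bit word while r.fields.RSYS_INTCONFIGRORA selects the upper field, and both views stay in sync by construction.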
If you are using UVM, then you should be building a RAL model of your registers.
I have the following code in some e file:
<'
package my_package;
struct packet {
    foo() is {
        print "Hello";
    };
};
'>
And my top file imports several files, including this one, and at some point it calls the foo() method.
Now, by mistake I added this code:
struct packet {};
in some other file (I just forgot that I already had a struct called “packet”), which is imported by top before the above file.
Strangely, when I tried to load the top file, I got this error:
*** Error: 'p' (of type main::packet) does not have 'foo()' method.
at line 9 in top.e
p.foo();
But why didn’t it fail already on the file that defines foo()?
That file has a struct declaration for packet, but packet was already (mistakenly) declared in an earlier file, so why didn't it give a duplicate type name error? Is it allowed to have two structs with the same name?
Actually, it's not that the main package takes precedence.
Rather, when a type name is used in some file, the package to which that file belongs takes precedence.
In this case, the top.e file probably didn't have any "package" statement, so it also belonged to package main.
If top.e had "package my_package", then "packet" in it would resolve to my_package::packet (and not to main::packet), and there would be no error.
You are allowed to have the same name for different structs, but they must be defined in different packages. In your case you first define packet in the my_package package. I'm guessing the other code you added was in some other file that did not have the line package my_package; in it. This means you defined another struct called packet in the main package. This effectively means that you have two different types: my_package::packet and main::packet. In main::packet you didn't define any foo() method (as you can see from the error message). As Yuti mentions, in your top.e file you probably don't have a package declared, so the main package takes precedence over any other package.
As an exercise, if you change your code in top.e to my_package::packet instead of simply packet it's going to work. You can anyway see something is wrong from the error message. You know you expected my_package::packet, but you were creating a main::packet.
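For illustration, a sketch of that fix in top.e, assuming foo() is called from an extend of sys and that top.e already imports the file defining my_package::packet (the surrounding code is made up):

<'
extend sys {
    run() is also {
        -- qualify the type so it resolves to the struct that defines foo()
        var p: my_package::packet = new;
        p.foo();
    };
};
'>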
Have a look in the Specman e Language Reference, section 28, Encapsulation Constructs for more info on packages.
In this header file, the members of a struct are declared through a preprocessor macro that takes a data type and a name, instead of being declared with the data type directly. Usually a data type is used to declare variables directly, but here the data type is passed to the preprocessor. When should a data type and a variable name be sent to the preprocessor to declare variables, and why is it done here?
#define DECLARE_REFERENCE(type, name) \
    union { type name; int64_t name##_; }

typedef struct _STRING
{
    int32_t flags;
    int32_t length;
    DECLARE_REFERENCE(char*, identifier);
    DECLARE_REFERENCE(uint8_t*, string);
    DECLARE_REFERENCE(uint8_t*, mask);
    DECLARE_REFERENCE(MATCH*, matches_list_head);
    DECLARE_REFERENCE(MATCH*, matches_list_tail);
    REGEXP re;
} STRING;
Why is this code doing this for declarations? Because, as the body of DECLARE_REFERENCE shows, when a type and name are passed to this macro it does more than just the declaration: it builds something else out of the name as well, for some other unknown purpose. If you only wanted to declare a variable, you wouldn't do this; it does something distinct from simply declaring one variable.
What does it actually do? The unions that the macro declares provide a second name for accessing the same space as a different type. In this case you can get at the references themselves, or at an unconverted integer representation of their bit pattern, assuming that int64_t is the same size as a pointer on the target.
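As a sketch of the mechanism (a hypothetical minimal reproduction, not the original program), this is roughly what one DECLARE_REFERENCE line buys you:

#include <stdint.h>
#include <stdio.h>

/* What DECLARE_REFERENCE(char*, identifier) expands to inside the struct:
 * an anonymous union (C11) that gives two names for the same storage. */
struct example {
    union { char *identifier; int64_t identifier_; };
};

int main(void) {
    struct example e;
    e.identifier = "hello";                                /* normal, typed access */
    printf("0x%llx\n", (unsigned long long)e.identifier_); /* same bits viewed as an
                                                              integer, assuming
                                                              64-bit pointers */
    return 0;
}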
Using a macro for this potentially serves several purposes I can think of off the bat:
Saves keystrokes
Makes the code more readable - but only to people who already know what the macros mean
If the secondary way of getting at reference data is only used for debugging purposes, it can be disabled easily for a release build, generating compiler errors on any surviving debug code
It enforces the secondary status of the access path, hiding it from people who just want to see what's contained in the struct and its formal interface
Should you do this? No. This does more than just declare variables; it also does something else, and that other thing is clearly specific to the gory internals of the rest of the containing program. Without seeing the rest of the program, we may never fully understand the rest of what it does.
When you need to do something specific to the internals of your program, you'll (hopefully) know when it's time to invent your own thing-like-this (most likely never); but don't copy others.
So the overall lesson here is to identify places where people aren't writing in straightforward C, but are coding to their particular application, and to separate those two, and not take quirks from a specific program as guidelines for the language as a whole.
Sometimes it is necessary to have a number of declarations which are guaranteed to have some relationship to each other. Some simple kinds of relationships such as constants that need to be numbered consecutively can be handled using enum declarations, but some applications require more complex relationships that the compiler can't handle directly. For example, one might wish to have a set of enum values and a set of string literals and ensure that they remain in sync with each other. If one declares something like:
#define GENERATE_STATE_ENUM_LIST \
    ENUM_LIST_ITEM(STATE_DEFAULT, "Default") \
    ENUM_LIST_ITEM(STATE_INIT, "Initializing") \
    ENUM_LIST_ITEM(STATE_READY, "Ready") \
    ENUM_LIST_ITEM(STATE_SLEEPING, "Sleeping") \
    ENUM_LIST_ITEM(STATE_REQ_SYNC, "Starting synchronization") \
    // This line should be left blank except for this comment
Then code can use the GENERATE_STATE_ENUM_LIST macro both to declare an enum type and a string array, and ensure that even if items are added or removed from the list each string will match up with its proper enum value. By contrast, if the array and enum declarations were separate, adding a new state to one but not the other could cause the values to get "out of sync".
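For example, a minimal sketch of how such a list macro is typically consumed (state_t and state_names are illustrative names, not from the question):

/* Generate the enum: each ENUM_LIST_ITEM keeps only the symbolic name. */
#define ENUM_LIST_ITEM(name, text) name,
typedef enum { GENERATE_STATE_ENUM_LIST } state_t;
#undef ENUM_LIST_ITEM

/* Generate the matching string table: each ENUM_LIST_ITEM keeps only the text. */
#define ENUM_LIST_ITEM(name, text) text,
static const char *state_names[] = { GENERATE_STATE_ENUM_LIST };
#undef ENUM_LIST_ITEM

Because both expansions walk the same list in the same order, state_names[STATE_READY] stays "Ready" no matter how many items are added or removed.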
I'm not sure what the purpose of the macros is in your particular case, but the pattern can sometimes be a reasonable one. The biggest question is whether it's better to (ab)use the C preprocessor so as to allow such relationships to be expressed in valid-but-ugly C code, or whether it would be better to use some other tool that takes a list of states and generates the appropriate C code from it.
I'm trying to understand a specific thing about OCaml modules and their compilation:
am I forced to redeclare types already declared in a .mli inside the specific .ml implementations?
Just to give an example:
(* foo.mli *)
type foobar = Bool of bool | Float of float | Int of int
(* foo.ml *)
type baz = foobar option
This, according to my normal way of thinking about interfaces/implementations, should be ok but it says
Error: Unbound type constructor foobar
while trying to compile with
ocamlc -c foo.mli
ocamlc -c foo.ml
Of course the error disappears if I declare foobar inside foo.ml too, but that seems cumbersome since I have to keep the two in sync on every change.
Is there a way to avoid this redundancy, or am I forced to redeclare types every time?
Thanks in advance
OCaml tries to force you to separate the interface (.mli) from the implementation (.ml). Most of the time, this is a good thing; for values, you publish the type in the interface and keep the code in the implementation. You could say that OCaml is enforcing a certain amount of abstraction (interfaces must be published; no code in interfaces).
For types, very often, the implementation is the same as the interface: both state that the type has a particular representation (and perhaps that the type declaration is generative). Here, there can be no abstraction, because the implementer doesn't have any information about the type that he doesn't want to publish. (The exception is basically when you declare an abstract type.)
One way to look at it is that the interface already contains enough information to write the implementation. Given the interface type foobar = Bool of bool | Float of float | Int of int, there is only one possible implementation. So don't write an implementation!
A common idiom is to have a module that is dedicated to type declarations, and make it have only a .mli. Since types don't depend on values, this module typically comes in very early in the dependency chain. Most compilation tools cope well with this; for example ocamldep will do the right thing. (This is one advantage over having only a .ml.)
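A minimal sketch of that idiom, with made-up file names:

(* types.mli -- the whole module is just this interface; there is no types.ml *)
type foobar = Bool of bool | Float of float | Int of int

(* foo.ml -- other modules refer to the type through the module name *)
type baz = Types.foobar option

let default : baz = Some (Types.Int 0)

Compiling types.mli with ocamlc -c produces types.cmi, which is all foo.ml needs in order to compile.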
The limitation of this approach is when you also need a few module definitions here and there. (A typical example is defining a type foo, then an OrderedFoo : Map.OrderedType module with type t = foo, then a further type declaration involving 'a Map.Make(OrderedFoo).t.) These can't be put in interface files. Sometimes it's acceptable to break down your definitions into several chunks, first a bunch of types (types1.mli), then a module (mod1.mli and mod1.ml), then more types (types2.mli). Other times (for example if the definitions are recursive) you have to live with either a .ml without a .mli or duplication.
Yes, you are forced to redeclare types. The only ways around it that I know of are
Don't use a .mli file; just expose everything with no interface. Terrible idea.
Use a literate-programming tool or other preprocessor to avoid duplicating the interface declarations in the One True Source. For large projects, we do this in my group.
For small projects, we just duplicate type declarations. And grumble about it.
You can let ocamlc generate the mli file for you from the ml file:
ocamlc -i some.ml > some.mli
In general, yes, you are required to duplicate the types.
You can work around this, however, with Camlp4 and the pa_macro syntax extension (findlib package: camlp4.macro). It defines, among other things, an INCLUDE construct. You can use it to factor the common type definitions out into a separate file and include that file in both the .ml and .mli files. I haven't seen this done in a deployed OCaml project, however, so I don't know that it would qualify as recommended practice, but it is possible.
The literate programming solution, however, is cleaner IMO.
No, in the .mli file, just say type foobar. This will compile, but it makes foobar abstract, so its constructors will no longer be visible to users of the module.