Why does ModelSim give the error "Ambiguous types in signal assignment statement"? - variable-assignment

I am trying to compile the following example with ModelSim Microsemi 10.2c:
architecture example of assignment_to_an_aggregate is
type vowel_type is (a, e, i, o, u);
type consonant_type is (b, c, d, f, g);
signal my_vowel: vowel_type;
signal my_consonant: consonant_type;
begin
(my_vowel, my_consonant) <= (a, b);
end;
And it gives the following error:
** Error: assignment_to_aggregates.vhdl(40): (vcom-1349) Ambiguous types in signal assignment statement.
Possible target types are:
std.STANDARD.TIME_VECTOR
std.STANDARD.REAL_VECTOR
std.STANDARD.INTEGER_VECTOR
std.STANDARD.BIT_VECTOR
std.STANDARD.BOOLEAN_VECTOR
std.STANDARD.STRING
** Error: assignment_to_aggregates.vhdl(57): VHDL Compiler exiting
Could anyone explain why this doesn't work? And why would the compiler think that TIME_VECTOR, STRING, etc. are reasonable types for the target of this assignment? Note: I get the same error even when the target aggregate contains only signals of the same type.
Thanks!

While I can't comment on ModelSim's peculiar choice of candidate types in the message, the problem is that the type of the right hand side aggregate can't be determined from the context.
Try:
entity assignment_to_an_aggregate is
end entity;
architecture example of assignment_to_an_aggregate is
type vowel_type is (a, e, i, o, u);
type consonant_type is (b, c, d, f, g);
type vowel_consonant is
record
vowel: vowel_type;
consonant: consonant_type;
end record;
signal my_vowel: vowel_type;
signal my_consonant: consonant_type;
begin
(my_vowel, my_consonant) <= vowel_consonant'(a,b);
end;
And you'll note that the left hand side aggregate depends on the right hand side expression for its type.
(And a type declaration carries no simulation or synthesis overhead, and nowhere is a named object declared with the record type.)
Ok, so if I understand correctly, then everything that appears on the right hand side of an assignment needs to have an explicitly defined type? Or to put it another way, every aggregate must have a defined type? – VHDL Addict 5 mins ago
No. In this case the target of a signal assignment is an aggregate:
IEEE Std 1076-1993, 8.4 Signal assignment statement (-2008, 10.5/10.5.2.1):
If the target of the signal assignment statement is in the form of an
aggregate, then the type of the aggregate must be determinable from
the context, excluding the aggregate itself but including the fact
that the type of the aggregate must be a composite type.
What else is there for context besides the right hand side? You can't make the aggregate on the left hand side a qualified expression: the target of a signal assignment is a name or an aggregate of names, not an expression that could be qualified.
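For the note in the question about a target aggregate whose elements all have the same type, the same rule applies: the right hand side still has to supply a composite type for the target aggregate. A minimal sketch, using a made-up one-dimensional array type vowel_vector:
type vowel_vector is array (natural range <>) of vowel_type;
signal v1, v2: vowel_type;
...
-- the qualified expression gives the target aggregate its type:
(v1, v2) <= vowel_vector'(a, e);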
The ModelSim error message you got didn't specify which side of the assignment statement it was complaining about, while some other tool might be more enlightening:
assignment_to_aggregate.vhdl:23:19: type of waveform is unknown, use type qualifier
You ever notice error messages are intended for someone who doesn't need them?

Related

PureScript - Inferred Type Causes Compiler Warning

Consider the following simple snippet of PureScript code:
module Main where

import Prelude
import Effect (Effect)
import Effect.Console (logShow)

a :: Int
a = 5
b :: Int
b = 7
c = a + b
main ∷ Effect Unit
main = do
  logShow c
The program successfully infers the type of c to be Int, and outputs the expected result:
12
However, it also produces this warning:
No type declaration was provided for the top-level declaration of c.
It is good practice to provide type declarations as a form of documentation.
The inferred type of c was:
Int
in value declaration c
I find this confusing, since I would expect the Int type for c to be safely inferred. As the docs often say, "why derive types when the compiler can do it for you?" This seems like a textbook example of the simplest and most basic type inference.
Is this warning expected? Is there a standard configuration that would suppress it?
Does this warning indicate that every variable should in fact be explicitly typed?
In most cases, and certainly in the simplest cases, the types can be inferred unambiguously, and indeed, in those cases type signatures are not necessary at all. This is why simpler languages, such as F#, OCaml, or Elm, do not require type signatures.
But PureScript (and Haskell) has much more complicated cases too. Constrained types are one. Higher-rank types are another. It's a whole mess. Don't get me wrong, I love me some high-powered type system, but the sad truth is, type inference becomes ambiguous with all of that stuff a lot of the time, and sometimes doesn't work at all.
In practice, even when type inference does work, its results may be wildly different from what the developer intuitively expects, leading to very hard-to-debug issues. I mean, type errors in PureScript can be super vexing as it is, but imagine that happening across multiple top-level definitions, across multiple modules, even perhaps across multiple libraries. A nightmare!
So over the years a consensus has formed that overall it's better to have all the top-level definitions explicitly typed, even when it's super obvious. It makes the program much more understandable and puts constraints on the typechecker, providing it with "anchor points" of sorts, so it doesn't go wild.
But since it's not a hard requirement (most of the time), it's just a warning, not an error. You can ignore it if you wish, but do that at your own peril.
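For completeness, the warning in the question goes away as soon as c gets its own top-level signature:
c :: Int
c = a + b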
Now, another part of your question is whether every variable should in fact be explicitly typed, and the answer is "no".
As a rule, every top-level binding should be explicitly typed (and that's where you get a warning), but local bindings (i.e. let and where) don't have to, unless you need to clarify something that the compiler can't infer.
Moreover, in PureScript (and modern Haskell), local bindings are actually "monomorphised", a fancy term that basically means they can't be generic unless explicitly specified. This solves the problem of all that ambiguous type inference, while still working intuitively most of the time.
You can notice the difference with the following example:
f :: forall a b. Show a => Show b => a -> b -> String
f a b = s a <> s b
  where
  s x = show x
On the second line, s a <> s b, you get an error saying "Could not match type b with type a".
This happens because the where-bound function s has been monomorphised (meaning it's not generic) and its type has been inferred to be a -> String based on the s a usage. This means the s b usage is ill-typed.
This can be fixed by giving s an explicit type signature:
f :: forall a b. Show a => Show b => a -> b -> String
f a b = s a <> s b
  where
  s :: forall x. Show x => x -> String
  s x = show x
Now it's explicitly specified as generic, so it can be used with both a and b parameters.

Why does Haskell say this is ambiguous?

I have a type class defined like this:
class Repo e ne | ne -> e, e -> ne where
eTable :: Table (Relation e)
And when I try to compile it, I get this:
* Couldn't match type `Database.Selda.Generic.Rel
(GHC.Generics.Rep e0)'
with `Database.Selda.Generic.Rel (GHC.Generics.Rep e)'
Expected type: Table (Relation e)
Actual type: Table (Relation e0)
NB: `Database.Selda.Generic.Rel' is a type function, and may not be injective
The type variable `e0' is ambiguous
* In the ambiguity check for `eTable'
To defer the ambiguity check to use sites, enable AllowAmbiguousTypes
When checking the class method:
eTable :: forall e ne. Repo e ne => Table (Relation e)
In the class declaration for `Repo'
|
41 | eTable :: Table (Relation e)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
I was expecting everything to be unambiguous since I've explicitly stated that e determines ne and vice versa.
However, if I define my class like this, just for testing purposes, it compiles:
data Test a = Test a
class Repo e ne | ne -> e, e -> ne where
eTable :: Maybe (Test e)
I'm not quite sure what it is about the Table and Relation types that causes this.
Test is injective, since it is a type constructor.
Relation is not injective, since it is a type family.
Hence the ambiguity.
Silly example:
type instance Relation Bool = ()
type instance Relation String = ()
instance Repo Bool Ne where
eTable :: Table ()
eTable = someEtable1
instance Repo String Ne where
eTable :: Table ()
eTable = someEtable2
Now, what is eTable :: Table () ? It could be the one from the first or the second instance. It is ambiguous since Relation is not injective.
The source of the ambiguity actually has nothing to do with ne not being used in the class (which you headed off by using functional dependencies).
The key part of the error message is:
Expected type: Table (Relation e)
Actual type: Table (Relation e0)
NB: `Database.Selda.Generic.Rel' is a type function, and may not be injective
Note that it's the e that it's having trouble matching up, and the NB message drawing your attention to the issue of type functions and injectivity (you really have to know what that all means for the message to be useful, but it has all the terms you need to look up to understand what's going on, so it's quite good as programming error messages go).
The issue it's complaining about is a key difference between type constructors and type families. Type constructors are always injective, while type functions in general (and type families in particular) do not have to be.
In standard Haskell with no extensions, the only way you can build compound type expressions is by using type constructors, such as the left-hand side Test in your data Test a = Test a. I can apply Test (of kind * -> *) to a type like Int (of kind *) to get a type Test Int (of kind *). Type constructors are injective, which means for any two distinct types a and b, Test a is a distinct type from Test b [1]. This means that when type checking you can "run them backwards"; if I've got two types t1 and t2 that are each the result of applying Test, and I know that t1 and t2 are supposed to be equal, then I can "unapply" Test to get the argument types and check whether those are equal (or infer what one of them is if it was something I hadn't figured out yet and the other is known, etc.).
Type families (or any other form of type function that isn't known to be injective) don't provide us that guarantee. If I have two types t1 and t2 that are supposed to be equal, and they're both the result of applying some TypeFamily, there's no way to go from the resulting types to the types that TypeFamily was applied to. And in particular, there's no way to conclude from the fact that TypeFamily a and TypeFamily b are equal that a and b are equal as well; the type family might just happen to map two distinct types a and b to the same result (the definition of injectivity is that it doesn't do that). So if I knew which type a was but didn't know b, knowing that TypeFamily a and TypeFamily b are equal doesn't give me any more information about what type b should be.
Unfortunately, since standard Haskell only has type constructors, Haskell programmers get well-trained to just presume that the type checker can work backwards through compound types to connect up the components. We often don't even notice that the type checker needs to work backwards; we're so used to just looking at type expressions with similar structure and leaping to the obvious conclusions without working through all the steps that the type checker has to go through. But because type checking is based on working out the type of every expression both bottom-up [2] and top-down [3] and confirming that they are consistent, type checking expressions whose types involve type families can easily run into ambiguity problems where it looks "obviously" unambiguous to us humans.
In your Repo example, consider how the type checker will deal with a position where you use eTable, with (Int, Bool) for e, say. The top-down view will see that it's used in a context where some Table (Relation (Int, Bool)) is required. It'll compute what Relation (Int, Bool) evaluates to: say it's Set Int, so we need Table (Set Int). The bottom-up pass just says eTable can be Table (Relation e) for any e.
All of our experience with standard Haskell tells us that this is obvious: we just instantiate e to (Int, Bool), Relation (Int, Bool) evaluates to Set Int again, and we're done. But that's not actually how it works. Because Relation isn't injective, there could be some other choice for e that gives us Set Int for Relation e: perhaps Int. But if we choose e to be (Int, Bool) or Int we need to look for two different Repo instances, which will have different implementations for eTable, even though their type is the same.
Even adding a type annotation every time you use eTable like eTable :: Table (Relation (Int, Bool)) doesn't help. The type annotation only adds extra information to the top-down view, which we often already have anyway. The type-checker is still stuck with the problem that there could be (whether or not there actually are) other choices of e than (Int, Bool) which lead to eTable matching that type annotation, so it doesn't know which instance to use. Any possible use of eTable will have this problem, so it gets reported as an error when you're defining the class. It's basically for the same reason you get problems when you have a class with some members whose types don't use all of the type variables in the class head; you have to consider "variable only used under a type family" as much the same as "variable isn't used at all".
You could address this by adding a Proxy argument to eTable so that there's something fixing the type variable e that the type checker can "run backwards". So eTable :: Proxy e -> Table (Relation e).
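Here's a rough, self-contained sketch of that shape, with stand-ins for Selda's Table and Relation (the type family instances, Ne1/Ne2, and the method bodies are made up purely for illustration):
{-# LANGUAGE TypeFamilies, MultiParamTypeClasses, FunctionalDependencies #-}
import Data.Proxy (Proxy (..))

type family Relation e             -- stand-in: a type function that isn't injective
type instance Relation Bool = ()
type instance Relation Int  = ()

newtype Table r = Table String     -- stand-in for Selda's Table

data Ne1
data Ne2

class Repo e ne | ne -> e, e -> ne where
  eTable :: Proxy e -> Table (Relation e)   -- the Proxy argument pins down e

instance Repo Bool Ne1 where
  eTable _ = Table "bools"

instance Repo Int Ne2 where
  eTable _ = Table "ints"

main :: IO ()
main = case eTable (Proxy :: Proxy Bool) of
         Table name -> putStrLn name        -- prints "bools"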
Alternatively, with the TypeApplications extension you can now do as the error message suggests and turn on AllowAmbiguousTypes to get the class accepted, and then use things like eTable @(Int, Bool) to tell the compiler which choice for e you want. The reason this works where the type annotation eTable :: Table (Relation (Int, Bool)) doesn't is that the type annotation adds extra information to the context when the type checker is looking top-down, while the type application adds extra information when the type checker is looking bottom-up. Instead of "this expression is required to have a type that unifies with this type" it's "this polymorphic function is instantiated at this type".
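And a corresponding sketch of the AllowAmbiguousTypes / TypeApplications route, reusing the stand-in Table, Relation, and Ne1 declarations from the sketch above:
{-# LANGUAGE TypeFamilies, MultiParamTypeClasses, FunctionalDependencies,
             AllowAmbiguousTypes, TypeApplications #-}

class Repo e ne | ne -> e, e -> ne where
  eTable :: Table (Relation e)     -- accepted only because of AllowAmbiguousTypes

instance Repo Bool Ne1 where
  eTable = Table "bools"

demo :: IO ()
demo = case eTable @Bool of        -- @Bool fixes e; the fundep e -> ne then picks the instance
         Table name -> putStrLn name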
[1] Type constructors are actually even more restricted than just injectivity; applying Test to any type a results in a type with known structure Test a, so the entire universe of Haskell types is straightforwardly mirrored in types of the form Test t. A more general injective type function could instead do more "rearranging", such as mapping Int to Bool so long as it didn't also map Bool to Bool.
[2] From the type produced by combining the sub-parts of the expression
[3] From the type required of the context in which it is used

Eiffel: Covariant illegal types passed as arguments?

(emphasis mine)
Covariant redefinition of fields and functions provides no problems, but
covariant redefinition of arguments does create a problem that illegal
types can be passed as arguments.
But, if redefining field and function types covariantly causes no problems, then
how come redefining an argument's type covariantly can cause trouble?
Covariant redefinition equals subtyping, right? And subtypes can take the place of their supertypes!
What's the catch?
The issue is not with covariance itself. (In particular, if it were contravariance, Design-by-Contract would be impossible, because argument types in the features of descendant classes would not necessarily have the features available in their parents. With covariance there is no such problem.)
Problematic is a combination of covariance with polymorphism. E.g.
class A feature
foo (a: A) do a.bar end -- (1)
bar do end
end
class B inherit A redefine foo end feature
foo (a: B) do a.qux end -- (2)
qux do end
end
Now the following code would crash:
a: A; b: B
...
create b
a := b
a.foo (create {A})
Indeed, a.foo would call version (2) because a is attached to an object of type B. However, the argument passed to this feature will be of type A, and A has no feature qux, which leads to a run-time error. This kind of error is known as a CAT-call (Changing Availability or Type).
A solution to this issue is to avoid using covariance together with polymorphism, i.e. a call should not be polymorphic or there should be no covariant redeclaration of arguments. The work on this solution is in progress.
"a call should not be polymorphic or there should be no covariant redeclaration of arguments."
How can you tell?
Let's change your example a bit:
buzz (a_a: A)
do
a_a.foo (create {A})
end
This looks innocent enough. But if buzz receives an argument of dynamic type B, you still get the CAT-call. The author of buzz might well be in a situation where the existence of B is unknown.
I think you need to drop the "a call should not be polymorphic or " bit of the advice. Simply prohibit covariant redeclaration of arguments.

What does to_unsigned do?

Could someone please explain to me how VHDL's to_unsigned works, or confirm that my understanding is correct?
For example:
C(30 DOWNTO 0) <= std_logic_vector (to_unsigned(-30, 31))
Here is my understanding:
-30 is a signed value, represented in bits as 1111111111100010
all bits should be inverted and to it '1' added to build the value of C
0000000000011101+0000000000000001 == 0000000000011111
In IEEE package numeric_std, the declaration for TO_UNSIGNED:
-- Id: D.3
function TO_UNSIGNED (ARG, SIZE: NATURAL) return UNSIGNED;
-- Result subtype: UNSIGNED(SIZE-1 downto 0)
-- Result: Converts a non-negative INTEGER to an UNSIGNED vector with
-- the specified SIZE.
You won't find a declared to_unsigned function whose ARG or SIZE parameters are of type integer. What is the consequence?
Let's put that in a Minimal, Complete, and Verifiable example:
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
entity what_to_unsigned is
end entity;
architecture does of what_to_unsigned is
signal C: std_logic_vector (31 downto 0);
begin
C(30 DOWNTO 0) <= std_logic_vector (to_unsigned(-30, 31));
end architecture;
A VHDL analyzer will give us an error:
ghdl -a what_to_unsigned.vhdl
what_to_unsigned.vhdl:12:53: static constant violates bounds
ghdl: compilation error
And tells us -30 (line 12, character 53) has a bounds violation, meaning that in this case the numeric literal, converted to universal_integer, doesn't convert to the type natural of parameter ARG in the function to_unsigned.
A different tool might tell us a bit more graphically:
nvc -a what_to_unsigned.vhdl
** Error: value -30 out of bounds 0 to 2147483647 for parameter ARG
File what_to_unsigned.vhdl, Line 12
C(30 DOWNTO 0) <= std_logic_vector (to_unsigned(-30, 31));
^^^
And actually tells us where in the source code the error is found.
It's safe to say what you think to_unsigned does is not what the analyzer thinks it does.
VHDL is a strongly typed language; you tried to provide a value in a place where that value is out of range for the parameter ARG of function TO_UNSIGNED, declared in IEEE package numeric_std.
The type NATURAL is declared in package standard and is made visible by the implicit context items library std; use std.standard.all; (see IEEE Std 1076-2008, 13.2 Design libraries):
Every design unit except a context declaration and package STANDARD is
assumed to contain the following implicit context items as part of its
context clause:
library STD, WORK; use STD.STANDARD.all;
The declaration of natural found in 16.3 Package STANDARD:
subtype NATURAL is INTEGER range 0 to INTEGER'HIGH;
NATURAL is a subtype of INTEGER with a constrained range that excludes negative numbers.
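As an aside, if the goal of the original line was the 31-bit two's complement pattern of -30, the numeric_std conversion that accepts negative integers is to_signed, so the statement was presumably after something like:
C(30 DOWNTO 0) <= std_logic_vector(to_signed(-30, 31));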
And about here you can see that you could have answered this question yourself with access to a standard-compliant VHDL tool and a copy of IEEE Std 1076-2008, the IEEE Standard VHDL Language Reference Manual.
The TL;DR detail
You could note that 9.4 Static expressions, 9.4.1 General gives permission to evaluate locally static expressions during analysis:
Certain expressions are said to be static. Similarly, certain discrete ranges are said to be static, and the type marks of certain subtypes are said to denote static subtypes.
There are two categories of static expression. Certain forms of expression can be evaluated during the analysis of the design unit in which they appear; such an expression is said to be locally static.
Certain forms of expression can be evaluated as soon as the design hierarchy in which they appear is elaborated; such an expression is said to be globally static.
There may be some standard-compliant tools that do not evaluate locally static expressions during analysis; "can be" is permissive, not mandatory. The two VHDL tools demonstrated on the above code example take advantage of that permission. In both tools the command line argument -a tells the tool to analyze the provided file, which, if successful, is inserted into the current working library (WORK by default; see 13.5 Order of analysis and 13.2 Design libraries).
Tools that defer bounds checking of locally static expressions to elaboration are typically purely interpretive, and even that can be overcome with a separate analysis pass.
The VHDL language can be used for formal specification of a design model used in formal proofs within the bounds specified by Annex D, Potentially nonportable constructs, and when relying on pure functions only (see 4. Subprograms and packages, 4.1 General).
VHDL compliant tools are guaranteed to give the same results, although there is no standardization of error messages nor limitations placed on tool implementation methodology.
to_unsigned is for converting between different types:
signal i : integer := 2;
signal u : unsigned(3 downto 0);
...
u <= i; -- Error, incompatible types
u <= to_unsigned(i, 4); -- OK, conversion function does the right thing
If you try to convert a negative integer, this is an error.
u <= to_unsigned(-2, 4); -- Error, does not work with negative numbers
If you simply want to negate an integer, i.e. 2 becomes -2, 5 becomes -5, just use the - operator:
u <= to_unsigned(-i, 4); -- OK as long as `i` was negative or zero
If you want the absolute value, the predefined abs operator handles that:
u <= to_unsigned(abs(i), 4);

checking for self-assignment in fortran overloaded assignment

I am trying to implement a polynomial class in Fortran 2003, with overloaded arithmetic operations and assignment. The derived type maintains an allocatable list of term definitions and coefficients, like this:
type polynomial
private
type(monomial),dimension(:),allocatable :: term
double precision,dimension(:),allocatable :: coef
integer :: nterms=0
contains
...
end type polynomial
interface assignment(=)
module procedure :: polyn_assignment
end interface
...
contains
elemental subroutine polyn_assignment(lhs,rhs)
implicit none
type(polynomial),intent(???) :: lhs
type(polynomial),intent(in) :: rhs
...
I had to make it elemental because this is intended to be used with matrices of polynomials. That works, in most cases at least. However, I somehow got myself worrying about self-assignment here. In C++ one can simply compare pointers to see whether the two objects are the same, but that doesn't seem to be an option in Fortran. The compiler does detect the self-assignment, though, and gave me a warning (gfortran 4.9.0).
When I have intent(out) for lhs, the allocatable entries for both lhs and rhs appeared to be deallocated on entry to the subroutine, which made sense since they were both p, and an intent(out) argument would first be finalized.
Then I tried to avoid the deallocation by using intent(inout), and to check for self-assignment by modifying one field of the lhs argument:
elemental subroutine polyn_assignment(lhs,rhs)
implicit none
type(polynomial),intent(inout) :: lhs
type(polynomial),intent(in) :: rhs
lhs%nterms=rhs%nterms-5
if(lhs%nterms==rhs%nterms)then
lhs%nterms=rhs%nterms+5
return
end if
lhs%nterms=rhs%nterms
Well, now this is what surprised me. When I do
p=p
It didn't detect the self-assignment and proceeded, giving me a polynomial with 0 terms but no memory violations. Confused, I printed lhs%nterms and rhs%nterms inside the assignment, only to find that they were different!
What is even more confusing is that when I did the same thing with
call polyn_assignment(p,p)
It works perfectly and detects that both arguments are the same. I am puzzled how invoking the subroutine through the assignment interface can behave differently from calling the subroutine directly.
Is there something special about assignment in Fortran 2003 that I've missed?
(First time asking a question here. Please correct me if I didn't do it right.)
If you have a statement a = b that invokes defined assignment via a subroutine sub, the assignment statement is equivalent to call sub(a, (b)). Note the parentheses: the right hand side argument is the result of evaluating a parenthesised expression and is therefore not conceptually the same object as b. See F2008 12.4.3.4.3 for details.
Consequently, a = a is equivalent to call sub(a, (a)). The two arguments are not aliased. It is different from call sub(a, a); the latter may (depending on the specifics of the internals of sub, including dummy argument attributes) break Fortran's argument aliasing rules (e.g. in your example, a statement such as call polyn_assignment(p, p) is illegal).
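To see the rule in action, here is a minimal, self-contained sketch along the lines of the question's type (the module and program names are made up for illustration); per F2008 12.4.3.4.3 the rhs dummy receives the value of the expression (p), so resetting the intent(out) lhs cannot disturb it:
module poly_assign_demo
  implicit none
  type :: polynomial
    double precision, allocatable :: coef(:)
    integer :: nterms = 0
  end type
  interface assignment(=)
    module procedure polyn_assignment
  end interface
contains
  elemental subroutine polyn_assignment(lhs, rhs)
    type(polynomial), intent(out) :: lhs   ! reset to defaults on entry
    type(polynomial), intent(in)  :: rhs   ! holds the value of the expression (p)
    lhs%nterms = rhs%nterms
    if (allocated(rhs%coef)) lhs%coef = rhs%coef
  end subroutine
end module

program self_assign_demo
  use poly_assign_demo
  implicit none
  type(polynomial) :: p
  p%nterms = 3
  allocate(p%coef(3))
  p%coef = [1.0d0, 2.0d0, 3.0d0]
  p = p              ! equivalent to call polyn_assignment(p, (p))
  print *, p%nterms  ! the standard requires 3 here, not 0
end program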