Ada: deletion from a heterogeneous list - operator-overloading

My assignment requires the creation of a package that creates a heterogeneous (using inheritance) doubly linked list. Inserting nodes into the list is simple enough, but my issue comes in when I have to locate a node containing certain information.
PACKAGE AbstList IS
   TYPE AbstractList IS LIMITED PRIVATE;
   TYPE Node IS TAGGED PRIVATE;
   TYPE NodePtr IS ACCESS ALL Node'Class;
   PROCEDURE Init_Head(List: ACCESS AbstractList);
   PROCEDURE InsertFront(List: ACCESS AbstractList; Item: IN NodePtr; Success: OUT Boolean);
   PROCEDURE InsertRear(List: ACCESS AbstractList; Item: IN NodePtr; Success: OUT Boolean);
   FUNCTION ListSize(List: ACCESS AbstractList) RETURN Integer;
   -- The following are commented out as they are not complete in the package body
   --FUNCTION FindItem(List: ACCESS AbstractList; Value: NodePtr) RETURN NodePtr;
   --PROCEDURE Delete(List: ACCESS AbstractList; Item: NodePtr);
   --PROCEDURE Print(List: ACCESS AbstractList);
PRIVATE
   TYPE Node IS TAGGED RECORD
      Rlink, Llink: NodePtr;
   END RECORD;
   TYPE AbstractList IS LIMITED RECORD
      Count: Integer := 0;
      Head: NodePtr := NEW Node;
   END RECORD;
END AbstList;
One such record that I am using to insert into the list is the following:
TYPE CarName IS (GMC, Chevy, Ford, RAM);
TYPE Car IS NEW AbstList.Node WITH RECORD
   NumDoors: Integer;
   Manufacturer: CarName := GMC; -- Default manufacturer
END RECORD;
So for example, how could I find a node in the list that contains a specified "Manufacturer"? It was suggested to me that I overload the "=" operator, though I am not sure how this would work given what I have. Any suggestions would be appreciated.

According to the ARM, the equality operator is predefined for non-limited types, which is the case for your Node type.
If you want behaviour different from the default (componentwise equality of every member of your record), just override it.
The function spec is on the same page; just replace T with your type (in this case Car) and write the comparison you want.

The Ada 95 Rationale, Part 2, Chapter 4, says in 4.3, Class Wide Types and Operations:
The predefined equality operators and the membership tests are generalized to apply to class-wide types. Like other predefined operations on such types, the implementation will depend on the particular specific type of the operands. Unlike normal dispatching operations, however, Constraint_Error is not raised if the tags of the operands do not match.
For equality, tag mismatch is treated as inequality. Only if the tags match is a dispatch then performed to the type-specific equality checking operation. This approach allows a program to safely compare two values of a class-wide tagged type for equality, without first checking that their tags match. The fact that no exception is raised in such an equality check is consistent with the other predefined relational operators, as noted in [RM83 4.5.2(12)].
So you would expect to be able to say, in FindItem for example,
   Current : NodePtr := List.Head.Rlink;
begin
   ...
   if Value.all = Current.all then
      -- we’ve found a match
but the predefined equality for Car includes the Node components Llink, Rlink as well.
What’s required is an equality operation which just compares the Car components.
You might override predefined equality by saying
type Node is abstract tagged private;
function "=" (L, R : Node) return Boolean is abstract;
and then
type Car is new AbstList.Node with record
   NumDoors: Integer;
   Manufacturer: CarName := GMC;
end record;

overriding
function "=" (L, R : Car) return Boolean is
  (L.Manufacturer = R.Manufacturer and then L.NumDoors = R.NumDoors);
(this is Ada 2012 syntax).
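With that override in place, FindItem can walk the list and compare payloads through the dispatching "=". A minimal sketch, assuming the Head node above acts as the sentinel of a circular list (untested):
FUNCTION FindItem(List: ACCESS AbstractList; Value: NodePtr) RETURN NodePtr IS
   Current : NodePtr := List.Head.Rlink;
BEGIN
   WHILE Current /= List.Head LOOP
      IF Value.all = Current.all THEN  -- dispatches when the tags match
         RETURN Current;
      END IF;
      Current := Current.Rlink;
   END LOOP;
   RETURN NULL;  -- no matching node
END FindItem;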


A function call misbehaves in PostgreSQL

I use PostgreSQL 10.3.
I have created the following domains:
CREATE DOMAIN common.citext_nullable
  AS extensions.citext;

CREATE DOMAIN common.citext_not_null
  AS extensions.citext NOT NULL;

CREATE DOMAIN common.smallint_ge_zero_nullable
  AS smallint;

ALTER DOMAIN common.smallint_ge_zero_nullable
  ADD CONSTRAINT smallint_ge_zero_nullable_check CHECK (value >= 0);
and the following function:
CREATE OR REPLACE FUNCTION common.fun_name(
    p_1 common.citext_not_null,
    p_2 common.citext_nullable,
    p_3 common.citext_nullable,
    p_4 common.smallint_ge_zero_nullable)
RETURNS ...
LANGUAGE plpgsql
AS $BODY$
DECLARE
    ...
BEGIN
    ...
END;
$BODY$;
Notes:
- All parameters/arguments are of domain types.
- Domains and functions are in the same schema, "common".
- The schema "common" is included in the search path.
- All extensions are in the schema "extensions".
- The schema "extensions" is also included in the search path.
- The "citext"-based domains work as expected.
- The "smallint"-based domain works strangely.
- The above domains and function are simplified for the scope of the question.
I can call the function either by
SELECT fun_name('any', 'any', 'any', 5::smallint_ge_zero_nullable);
or even by
SELECT fun_name('any', 'any', 'any', '5');
but I cannot call it by:
SELECT fun_name('any', 'any', 'any', 5);
I get the following error:
SQL Error [42883]: ERROR: function fun_name(unknown, unknown, unknown, integer) does not exist
Hint: No function matches the given name and argument types. You might need to add explicit type casts.
Position: 8
Why "citext"-based arguments are shown as "unknown"? As per doc, page 1431
argtype
The data type(s) of the function's arguments (optionally schema-qualified), if any. The argument types can be base, composite, or domain types, or ...
(It is "funny" the "unknown" arguments, in the end, to be accepted and work as expected and the "integer" argument not to be accepted and behave strangely.)
This behavior is related to integer-to-smallint casting, not to the domain.
You can find the rules for matching a function call to a candidate function here. The resolver uses an implicit cast when one is available and always matches 'unknown' literals to anything. Since your function has only one signature, case 1 (explicit cast) and case 2 (all unknown) are both matched to it.
There is no automatic down-casting, so integer -> smallint won't occur implicitly. Consider a function with two signatures, f(input int) and f(input smallint): if down-casting were to occur, which one should be used when calling f(5)? This mailing-list thread gives more details.
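A small illustration of that ambiguity, with hypothetical functions (not part of the question); on a stock setup the untyped call should fail as "not unique":
CREATE FUNCTION f(input int)      RETURNS text AS $$ SELECT 'int'      $$ LANGUAGE sql;
CREATE FUNCTION f(input smallint) RETURNS text AS $$ SELECT 'smallint' $$ LANGUAGE sql;

SELECT f(5);    -- resolves to f(integer): the constant 5 is typed integer
SELECT f('5');  -- ERROR: function f(unknown) is not unique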
So the solutions are to either:
- do the explicit casting (case 1),
- or add a wrapper function taking the generic type (integer) that does the casting for you (and handles errors), as sketched below,
- or call the function with the output of a table column having the proper type.
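A minimal sketch of the wrapper idea, with a hypothetical name and a hypothetical return type of TABLE(a int) standing in for the elided one:
CREATE OR REPLACE FUNCTION common.fun_name_wrapper(
    p_1 common.citext_not_null,
    p_2 common.citext_nullable,
    p_3 common.citext_nullable,
    p_4 integer)
RETURNS TABLE(a int)  -- hypothetical; match the real return type
LANGUAGE plpgsql
AS $BODY$
BEGIN
    -- Reject values that cannot fit the smallint-based domain early,
    -- with a clearer message than a bare cast failure would give.
    IF p_4 IS NOT NULL AND (p_4 < 0 OR p_4 > 32767) THEN
        RAISE EXCEPTION 'p_4 out of range for smallint_ge_zero_nullable: %', p_4;
    END IF;
    RETURN QUERY SELECT * FROM common.fun_name(
        p_1, p_2, p_3, p_4::common.smallint_ge_zero_nullable);
END;
$BODY$;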

Why does Haskell say this is ambiguous?

I have a type class defined like this:
class Repo e ne | ne -> e, e -> ne where
  eTable :: Table (Relation e)
And when I try to compile it I get this:
* Couldn't match type `Database.Selda.Generic.Rel (GHC.Generics.Rep e0)'
    with `Database.Selda.Generic.Rel (GHC.Generics.Rep e)'
  Expected type: Table (Relation e)
    Actual type: Table (Relation e0)
  NB: `Database.Selda.Generic.Rel' is a type function, and may not be injective
  The type variable `e0' is ambiguous
* In the ambiguity check for `eTable'
  To defer the ambiguity check to use sites, enable AllowAmbiguousTypes
  When checking the class method:
    eTable :: forall e ne. Repo e ne => Table (Relation e)
  In the class declaration for `Repo'
   |
41 |   eTable :: Table (Relation e)
   |   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
I was expecting everything to be unambiguous since I've explicitly stated that e determines ne and vice versa.
However, if I try to define my class like this just for testing purposes, it compiles:
data Test a = Test a

class Repo e ne | ne -> e, e -> ne where
  eTable :: Maybe (Test e)
I'm not quite sure what it is about the Table and Relation types that causes this.
Test is injective, since it is a type constructor.
Relation is not injective, since it is a type family.
Hence the ambiguity.
Silly example:
type instance Relation Bool   = ()
type instance Relation String = ()

instance Repo Bool Ne where
  eTable :: Table ()
  eTable = someEtable1

instance Repo String Ne where
  eTable :: Table ()
  eTable = someEtable2
Now, what is eTable :: Table ()? It could be the one from the first or the second instance. It is ambiguous since Relation is not injective.
The source of the ambiguity actually has nothing to do with ne not being used in the class (which you headed off by using functional dependencies).
The key part of the error message is:
Expected type: Table (Relation e)
Actual type: Table (Relation e0)
NB: `Database.Selda.Generic.Rel' is a type function, and may not be injective
Note that it's the e that it's having trouble matching up, and the NB message drawing your attention to the issue of type functions and injectivity (you really have to know what that all means for the message to be useful, but it has all the terms you need to look up to understand what's going on, so it's quite good as programming error messages go).
The issue it's complaining about is a key difference between type constructors and type families. Type constructors are always injective, while type functions in general (and type families in particular) do not have to be.
In standard Haskell with no extensions, the only way to build compound type expressions is with type constructors, such as the left-hand side Test in your data Test a = Test a. I can apply Test (of kind * -> *) to a type like Int (of kind *) to get a type Test Int (of kind *). Type constructors are injective, which means for any two distinct types a and b, Test a is a distinct type from Test b1. This means that when type checking you can "run them backwards": if I've got two types t1 and t2 that are each the result of applying Test, and I know that t1 and t2 are supposed to be equal, then I can "unapply" Test to get the argument types and check whether those are equal (or infer what one of them is if the other is known and it was something I hadn't figured out yet, and so on).
Type families (or any other form of type function that isn't known to be injective) don't provide us that guarantee. If I have two types t1 and t2 that are supposed to be equal, and they're both the result of applying some TypeFamily, there's no way to go from the resulting types to the types that TypeFamily was applied to. And in particular, there's no way to conclude from the fact that TypeFamily a and TypeFamily b are equal that a and b are equal as well; the type family might just happen to map two distinct types a and b to the same result (the definition of injectivity is that it doesn't do that). So if I knew which type a was but didn't know b, knowing that TypeFamily a and TypeFamily b are equal doesn't give me any more information about what type b should be.
Unfortunately, since standard Haskell only has type constructors, Haskell programmers get well-trained to just presume that the type checker can work backwards through compound types to connect up the components. We often don't even notice that the type checker needs to work backwards, we're so used to just looking at type expressions with similar structure and leaping to the obvious conclusions without working through all the steps that the type checker has to go through. But because type checking is based on working out the type of every expression both bottom-up2 and top-down3 and confirming that they are consistent, type checking expressions whose types involve type families can easily run into ambiguity problems where it looks "obviously" unambiguous to us humans.
In your Repo example, consider how the type checker will deal with a position where you use eTable, with (Int, Bool) for e, say. The top-down view will see that it's used in a context where some Table (Relation (Int, Bool)) is required. It'll compute what Relation (Int, Bool) evaluates to: say it's Set Int, so we need Table (Set Int). The bottom-up pass just says eTable can be Table (Relation e) for any e.
All of our experience with standard Haskell tells us that this is obvious: we just instantiate e to (Int, Bool), Relation (Int, Bool) evaluates to Set Int again, and we're done. But that's not actually how it works. Because Relation isn't injective, there could be some other choice of e for which Relation e gives us Set Int: perhaps Int. But if we choose e to be (Int, Bool) or Int we need to look for two different Repo instances, which will have different implementations of eTable, even though their types are the same.
Even adding a type annotation every time you use eTable like eTable :: Table (Relation (Int, Bool)) doesn't help. The type annotation only adds extra information to the top-down view, which we often already have anyway. The type-checker is still stuck with the problem that there could be (whether or not there actually are) other choices of e than (Int, Bool) which lead to eTable matching that type annotation, so it doesn't know which instance to use. Any possible use of eTable will have this problem, so it gets reported as an error when you're defining the class. It's basically for the same reason you get problems when you have a class with some members whose types don't use all of the type variables in the class head; you have to consider "variable only used under a type family" as much the same as "variable isn't used at all".
You could address this by adding a Proxy argument to eTable so that there's something fixing the type variable e that the type checker can "run backwards". So eTable :: Proxy e -> Table (Relation e).
Alternatively, with the TypeApplications extension you can now do as the error message suggests and turn on AllowAmbiguousTypes to get the class accepted, and then use things like eTable @(Int, Bool) to tell the compiler which choice for e you want. The reason this works where the type annotation eTable :: Table (Relation (Int, Bool)) doesn't is that the annotation adds extra information to the context when the type checker is looking top-down, while the type application adds extra information when it is looking bottom-up. Instead of "this expression is required to have a type that unifies with this type" it's "this polymorphic function is instantiated at this type".
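A compilable sketch of both workarounds, using stand-ins for Selda's Table and Relation (hypothetical names, not the real selda API):
{-# LANGUAGE TypeFamilies, FunctionalDependencies, AllowAmbiguousTypes,
             TypeApplications #-}

import Data.Proxy (Proxy (..))

type family Relation e         -- stand-in for the non-injective family
newtype Table a = Table String -- stand-in for Selda's Table

class Repo e ne | ne -> e, e -> ne where
  -- Proxy variant: the argument mentions e, so the checker can run it backwards.
  eTableP :: Proxy e -> Table (Relation e)
  -- Ambiguous variant: only usable via an explicit type application.
  eTable :: Table (Relation e)

data Foo = Foo
data NeFoo
type instance Relation Foo = Int

instance Repo Foo NeFoo where
  eTableP _ = Table "foo"
  eTable    = Table "foo"

useProxy :: Table (Relation Foo)
useProxy = eTableP (Proxy :: Proxy Foo)

useTyApp :: Table (Relation Foo)
useTyApp = eTable @Foo  -- fixes e; the fundep e -> ne then fixes ne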
1 Type constructors are actually even more restricted than just injectivity; applying Test to any type a results in a type with known structure Test a, so the entire universe of Haskell types is straightforwardly mirrored in types of the form Test t. A more general injective type function could instead do more "rearranging", such as mapping Int to Bool so long as it didn't also map Bool to Bool.
2 From the type produced by combining the sub-parts of the expression
3 From the type required of the context in which it is used

Is there a way to disable function overloading in Postgres

My users and I do not use function overloading in PL/pgSQL. We always have one function per (schema, name) tuple. As such, we'd like to drop a function by name only, change its signature without having to drop it first, etc. Consider for example, the following function:
CREATE OR REPLACE FUNCTION myfunc(day_number SMALLINT)
RETURNS TABLE(a INT)
AS
$BODY$
BEGIN
    RETURN QUERY (SELECT 1 AS a);
END;
$BODY$
LANGUAGE plpgsql;
To save time, we would like to invoke it as follows, without qualifying 1 with ::SMALLINT, because there is only one function named myfunc, and it has exactly one parameter named day_number:
SELECT * FROM myfunc(day_number := 1)
There is no ambiguity, and the value 1 is consistent with SMALLINT type, yet PostgreSQL complains:
SELECT * FROM myfunc(day_number := 1);
ERROR: function myfunc(day_number := integer) does not exist
LINE 12: SELECT * FROM myfunc(day_number := 1);
^
HINT: No function matches the given name and argument types.
You might need to add explicit type casts.
When we invoke such functions from Python, we use a wrapper that looks up functions' signatures and qualifies parameters with types. This approach works, but there seems to be a potential for improvement.
Is there a way to turn off function overloading altogether?
Erwin sent a correct reply. My reply is related to the possibility of disabling overloading.
It is not possible to disable overloading: it is a base feature of the PostgreSQL function API system. We know there are some side effects, like strong function-signature rigidity, but this is protection against some unpleasant side effects when a function is used in views, table definitions, and so on. So you cannot disable it.
You can simply check whether or not you have overloaded functions:
postgres=# select count(*), proname
           from pg_proc
           where pronamespace <> 11  -- 11 = pg_catalog
           group by proname
           having count(*) > 1;
count | proname
-------+---------
(0 rows)
This is actually not directly a matter of function overloading (which would be impossible to "turn off"). It's a matter of function type resolution. (Of course, that algorithm could be more permissive without overloaded functions.)
All of these would just work:
SELECT * FROM myfunc(day_number := '1');
SELECT * FROM myfunc('1'); -- note the quotes
SELECT * FROM myfunc(1::smallint);
SELECT * FROM myfunc('1'::smallint);
Why?
The last two are rather obvious, you mentioned that in your question already.
The first two are more interesting, the explanation is buried in the Function Type Resolution:
unknown literals are assumed to be convertible to anything for this purpose.
And that should be the simple solution for you: use string literals.
An untyped literal '1' (with quotes) or "string literal" as defined in the SQL standard is different in nature from a typed literal (or constant).
A numeric constant 1 (without quotes) is cast to a numeric type immediately. The manual:
A numeric constant that contains neither a decimal point nor an
exponent is initially presumed to be type integer if its value fits in
type integer (32 bits); otherwise it is presumed to be type bigint if
its value fits in type bigint (64 bits); otherwise it is taken to be
type numeric. Constants that contain decimal points and/or exponents
are always initially presumed to be type numeric.
The initially assigned data type of a numeric constant is just a
starting point for the type resolution algorithms. In most cases the
constant will be automatically coerced to the most appropriate type
depending on context. When necessary, you can force a numeric value to
be interpreted as a specific data type by casting it.
Bold emphasis mine.
The assignment in the function call (day_number := 1) is a special case, the data type of day_number is unknown at this point. Postgres cannot derive a data type from this assignment and defaults to integer.
Consequently, Postgres looks for a function taking an integer first. Then for functions taking a type only an implicit cast away from integer, in other words:
SELECT casttarget::regtype
FROM pg_cast
WHERE castsource = 'int'::regtype
AND castcontext = 'i';
All of these would be found - and conflict if there were more than one function. That would be function overloading, and you would get a different error message. With two candidate functions like this:
SELECT * FROM myfunc(1);
ERROR: function myfunc(integer) is not unique
Note the "integer" in the message: the numeric constant has been cast to integer.
However, the cast from integer to smallint is "only" an assignment cast. And that's where the journey ends:
No function matches the given name and argument types.
More detailed explanation in these related answers:
PostgreSQL ERROR: function to_tsvector(character varying, unknown) does not exist
Generate series of dates - using date type as input
Dirty fix
You could fix this by "upgrading" the cast from integer to smallint to an implicit cast:
UPDATE pg_cast
SET castcontext = 'i'
WHERE castsource = 'int'::regtype
AND casttarget = 'int2'::regtype;
But I would strongly discourage tampering with the default casting system. Only consider this if you know exactly what you are doing. You'll find related discussions in the Postgres lists. It can have all kinds of side effects, starting with function type resolution, but not ending there.
Aside
Function type resolution is completely independent from the used language. An SQL function would compete with PL/perl or PL/pgSQL or "internal" functions just the same. The function signature is essential. Built-in functions only come first, because pg_catalog comes first in the default search_path.
There are plenty of built-in functions that are overloaded, so it simply would not work if you turned off function overloading.

Fortran Select Type with arrays [duplicate]

My question is, "Can a select type block be used to distinguish real :: realInput from real :: realArrayInput(:)?" It's clear how select type may be used to distinguish derived types, but becomes less clear to me how (or whether) it may be used on intrinsic types.
In Mad Libs form, can the blanks be filled in below to distinguish between the inputs above:
select type (input)
type is (real)
   print *, "I caught the realInput"
type is (___________)
   print *, "I caught the realArrayInput"
end select
I've found some related posts that did not quite contain the answer I was hoping for:
Select Type Issues
Determining Variable Type
No. input is either declared as an array or a scalar, even when it is polymorphic (and even when it is unlimited polymorphic).
The recent TS on further interoperability with C (which may become part of F201X) introduced the concept of assumed rank and the RANK intrinsic, which may do what you want. But there are many limitations on what can be done with assumed-rank objects. And regardless of that, SELECT TYPE still only works on type: the syntax of the select type construct simply doesn't permit specification of rank in the type guard statements. A sketch of the assumed-rank approach follows.
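A hedged sketch of that facility, using an assumed-rank dummy and the RANK intrinsic (requires a compiler that supports the TS; hypothetical subroutine name):
subroutine what_rank(input)
   class(*), dimension(..), intent(in) :: input  ! assumed-rank dummy
   select case (rank(input))
   case (0)
      print *, "scalar"
   case (1)
      print *, "rank-1 array"
   case default
      print *, "higher rank"
   end select
end subroutine what_rank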
Depending on what it is that you actually want to do, and beyond the generic interfaces mentioned by others, one way to have objects that may be either array or scalar in current Fortran (there are other possibilities) is to use derived-type wrappers that are extensions of a common parent type. You then use a polymorphic object declared as the parent type (or an unlimited polymorphic object) to refer to an object of the relevant derived type.
TYPE :: parent
END TYPE parent

TYPE, EXTENDS(parent) :: scalar_wrapper
   REAL :: scalar_component
END TYPE scalar_wrapper

TYPE, EXTENDS(parent) :: array_wrapper
   REAL :: array_component(10)
END TYPE array_wrapper
...
SUBROUTINE what_am_i(object)
   ! Note that object is scalar, but that doesn't tell us
   ! the rank of the components of the dynamic type of object.
   CLASS(parent), INTENT(IN) :: object
   !****
   SELECT TYPE (object)
   TYPE IS (scalar_wrapper)
      PRINT "('I am a scalar with value ',G0)", &
            object%scalar_component
   TYPE IS (array_wrapper)
      PRINT "('I am an array with values ',*(G0,:,','))", &
            object%array_component
   CLASS DEFAULT
      PRINT "('I am not sure what I am.')"
   END SELECT
END SUBROUTINE what_am_i
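For completeness, hypothetical usage of those wrappers (a fragment, not from the original answer):
CLASS(parent), ALLOCATABLE :: thing
ALLOCATE(thing, SOURCE=scalar_wrapper(scalar_component=1.5))
CALL what_am_i(thing)  ! selects the scalar_wrapper branch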
Just to combine IanH's answer and M.S.B's comment and explain in more detail: you cannot use the select type construct to distinguish between real scalars and real arrays, as they differ only in their dimension, not in their type. When you declare your variable input, you decide once and for all whether or not it has the dimension attribute:
class(*) :: input_scalar
class(*), dimension(10) :: input_array
Whichever value the variable takes later (or whichever object it points to, if it is a pointer), it cannot represent something with a dimensionality (rank) different from the one in its declaration.
On the other hand, you could, for example, use the interface construct (or generic bindings in type-bound procedures) to distinguish between objects of the same type but different rank. The example below demonstrates this for scalar and rank-1 integer and real arguments.
module testmod
  implicit none

  interface typetest
    module procedure typetest0, typetest1
  end interface typetest

contains

  subroutine typetest0(object)
    class(*), intent(in) :: object

    select type (object)
    type is (real)
      print *, "real scalar"
    type is (integer)
      print *, "integer scalar"
    end select
  end subroutine typetest0

  subroutine typetest1(object)
    class(*), dimension(:), intent(in) :: object

    select type (object)
    type is (real)
      print *, "real array"
    type is (integer)
      print *, "integer array"
    end select
  end subroutine typetest1

end module testmod

program test
  use testmod
  implicit none

  integer :: ii
  integer, dimension(10) :: iarray

  call typetest(ii)      ! invokes typetest0
  call typetest(iarray)  ! invokes typetest1
end program test

Types and classes of variables

Two R questions:
What is the difference between the type (returned by typeof) and the class (returned by class) of a variable? Is the difference similar to that in, say, C++?
What are possible types and classes of variables?
In R every "object" has a mode and a class. The former represents how an object is stored in memory (numeric, character, list and function) while the later represents its abstract type. For example:
d <- data.frame(V1 = c(1, 2))
class(d)
# [1] "data.frame"
mode(d)
# [1] "list"
typeof(d)
# [1] "list"
As you can see, data frames are stored in memory as lists, but they are wrapped into data.frame objects. The wrapper allows the use of member functions as well as the overloading of functions such as print with custom behavior.
typeof (and storage.mode) will usually give the same information as mode, but not always. Case in point:
typeof(c(1,2))
# [1] "double"
mode(c(1,2))
# [1] "numeric"
The reasoning behind this can be found here:
The R specific function typeof returns the type of an R object
Function mode gives information about the mode of an object in the sense of Becker, Chambers & Wilks (1988), and is more compatible with other implementations of the S language
The link that I posted above also contains a list of all native R basic types (vectors, lists etc.) and all compound objects (factors and data.frames) as well as some examples of how mode, typeof and class are related for each type.
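Two more cases where the three functions diverge, for illustration (output from a stock R session):
f <- factor(c("a", "b"))
class(f)
# [1] "factor"
typeof(f)
# [1] "integer"
# (factors are stored as integer codes)

g <- function(x) x
class(g)
# [1] "function"
typeof(g)
# [1] "closure"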
type really refers to the different data structures available in R. This discussion in the R Language Definition manual may get you started on objects and types.
On the other hand, class means something else in R than what you may expect. From
the R Language Definition manual (that came with your version of R):
2.2.4 Classes
R has an elaborate class system, principally controlled via the class attribute. This attribute is a character vector containing the list
of classes that an object inherits from. This forms the basis of the “generic methods” functionality in R.
This attribute can be accessed and manipulated virtually without restriction by users. There is no checking that an object actually contains the components that class methods expect. Thus, altering the class attribute should be done with caution, and when they are available specific creation and coercion functions should be preferred.