Is a is_variadic type trait possible in C++17?

Is it possible, in C++17, to design a type trait that detects whether a callable is variadic (and can therefore take an arbitrarily large number of arguments)?
template <class Callable>
struct is_variadic;
I currently do not see how to do that, but I cannot convince myself that it's not doable. So if it is doable, what would that look like?

Related

scala.collection.immutable.NumericRange[UInt]?

Trying to make a scala.collection.immutable.NumericRange[UInt]
Looks like it needs a scala.math.Integral[UInt].
But there does not seem to be a spire.math.Integral[UInt].
I am assuming that's because UInt violates the laws around Integral in some way.
I am mostly interested in NumericRange[UInt].contains(x: UInt)
Is it folly for me to attempt to construct a scala.math.Integral[UInt] on my own?
Or should I find some other way to get contains?
Is there a trait that should exist, to be inherited by Set[T], Range, and NumericRange[T], that declares a contains(x: T) method?
What should that trait be called?
Should I do this as a type class?
What should I call this type class?
If you just need contains(x: UInt), you should use spire.math.Interval[UInt].
See: https://typelevel.org/spire/api/spire/math/Interval.html
If you need other bits of the NumericRange[UInt] then see other answers that arrive in the future.
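For instance, a minimal sketch of the Interval approach (assuming spire is on the classpath; spire.implicits._ should bring the required Order[UInt] into scope):

import spire.math.{Interval, UInt}
import spire.implicits._

// A closed interval stands in for the range; contains is just a pair of comparisons
val range = Interval.closed(UInt(1), UInt(100))
range.contains(UInt(42))   // true
range.contains(UInt(101))  // false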

Is polymorphism strictly a type-theory?

I am learning OOP concepts, which do not really have well-established definitions.
I heard different things about polymorphism and cannot decide what is right.
Most people will say that it is a type-theoretic concept, meaning that a function is able to accept parameters of multiple types that have something in common.
Ad hoc polymorphism is about different overloads of the same function.
Parametric polymorphism is generic functions.
Subtyping polymorphism means that if a function accepts a certain class as a parameter, it can also accept its subclasses. (Of course, only concrete subclasses, not abstract ones, can be instantiated and passed as arguments.) These three kinds are sketched in code below.
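A minimal Scala sketch of those three kinds (all names are illustrative):

object PolymorphismKinds {
  // Ad hoc: two overloads of the same name
  def show(x: Int): String = s"int: $x"
  def show(x: Boolean): String = s"bool: $x"

  // Parametric: one generic implementation for every type A
  def firstOf[A](xs: List[A]): A = xs.head

  // Subtyping: accept takes Base, so concrete subclasses work too
  class Base
  class Derived extends Base
  def accept(b: Base): Unit = ()
  accept(new Derived) // compiles because Derived <: Base
}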
There is a seemingly different definition. There are those who say polymorphism means that a function can have different implementations (morphs/forms).
In that sense...
- interface functions,
- abstract classes’ abstract functions,
- and virtual functions that can be overridden by the subtype
...are all considered polymorphic.
As I was told, polymorphism in this sense can be defined as having different results if the same function is called on different objects.
And adding to the confusion, someone said only virtual functions are polymorphic because they already have an implementation.
To me, the first way I presented polymorphism and the second seem completely different, but maybe they both fit the definition of polymorphism and it is just me failing to understand it.
So what is polymorphism in programming? Is it just type theory?
In this context I would like to refer to this related question:
https://stackoverflow.com/questions/25163683/polymorphism-and-interfaces-clarification#=
It raises almost the same problem, but I could not really make out the conclusion.
Yes and No.
Yes, in classic inheritance-based languages it works that way.
No, since in other languages the call of a method on an object might be dynamically resolved (e.g. by searching at runtime through a list of objects held in a field, a so-called aggregate in COM terms).
In other words, the fact that the method exists on the object's type is not defined by type theory. At least not universally. The language might not even be typed.
For a statically inherited object model it is true, however. In other words, typing (subtyping/inheritance, the concept of virtual methods) is an implementation of polymorphism in languages with such an object model. But not all languages have one.
Some have dispatch polymorphism and can add methods at runtime (like Objective-C), or figure out whether the method exists at all (e.g. COM's IDispatch).
The classic test of polymorphism is "the duck quacks": you have a generic "animal" and call a method makeSound, and if you assigned a duck, it quacks. So you call a method (pass a message, in old OO jargon) on a generic object, and you get the behaviour of the more specialized object assigned to it.
What constitutes a "generic" object depends on the language. In statically inherited languages the generic object must have the method declared, sometimes with special modifiers (virtual) to signal overridability.
In other languages the generic object can be the root object, and the runtime will figure out if it has a makesound method.
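A minimal REPL-style sketch of that test in Scala (names are illustrative):

abstract class Animal { def makeSound(): String }
class Duck extends Animal { override def makeSound(): String = "quack" }
class Dog  extends Animal { override def makeSound(): String = "woof" }

// The static type is the generic Animal; the behaviour is the duck's
val animal: Animal = new Duck
println(animal.makeSound()) // prints "quack"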

The object-functional impedance mismatch

In OOP it is good practice to talk to interfaces not to implementations. So, e.g., you write something like this (by Seq I mean scala.collection.immutable.Seq :)):
// talk to the interface - good OOP practice
def doSomething[A](xs: Seq[A]) = ???
not something like the following:
// talk to the implementation - bad OOP practice
def doSomething[A](xs: List[A]) = ???
However, in pure functional programming languages, such as Haskell, you don't have subtype polymorphism and use, instead, ad hoc polymorphism through type classes. So, for example, you have the list data type and a monadic instance for list. You don't need to worry about using an interface/abstract class because you don't have such a concept.
In hybrid languages, such as Scala, you have both type classes (through a pattern, actually, and not first-class citizens as in Haskell, but I digress) and subtype polymorphism. In scalaz, cats and so on you have monadic instances for concrete types, not for the abstract ones, of course.
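For reference, a minimal sketch of the type-class pattern in Scala (a simplified Functor, not the actual cats/scalaz definition):

trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

object Functor {
  // The instance is defined for the concrete List, not for the abstract Seq
  implicit val listFunctor: Functor[List] = new Functor[List] {
    def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
  }
}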
Finally, the question: given this hybridism of Scala, do you still respect the OOP rule to talk to interfaces, or do you just talk to concrete types to take advantage of functors, monads and so on directly, without having to convert to a concrete type whenever you need to use them? Put differently, is it still good practice in Scala to talk to interfaces even if you want to embrace FP instead of OOP? If not, what if you chose to use List and, later on, you realized that a Vector would have been a better choice?
P.S.: In my examples I used a simple method, but the same reasoning applies to user defined types. E.g.:
case class Foo(bars: Seq[Bar], ...)
What I would attack here is your "concrete vs. interface" concept. Look at it this way: every type has an interface, in the general sense of the term "interface." A "concrete" type is just a limiting case.
So let's look at Haskell lists from this angle. What's the interface of a list? Well, lists are an algebraic data type, and all such data types have the same general form of interface and contract:
You can construct instances of the type using its constructors according to their arities and argument types;
You can observe instances of the type by matching against their constructors according to their arities and argument types;
Construction and observation are inverses: when you pattern match against a value, what you get out is exactly what was put into it (see the sketch after this list).
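A minimal Scala sketch of that inverse relationship (the Shape type is illustrative):

sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rect(w: Double, h: Double) extends Shape

val s: Shape = Circle(2.0)        // construct via a constructor
s match {                         // observe via pattern matching
  case Circle(r)  => println(r)   // r == 2.0, exactly what was put in
  case Rect(w, h) => println(w * h)
}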
If you look at it in these terms, I think the following rule works pretty well in either paradigm:
Choose types whose interfaces and contracts match exactly with your requirements.
If their contract is weaker than your requirements, then they won't maintain invariants that you need;
If their contracts are stronger than your requirements, you may unintentionally couple yourself to the "extra" details and limit your ability to change the program later on.
So you no longer ask whether a type is "concrete" or "abstract"—just whether it fits your requirements.
These are my two cents on this subject. In Haskell you have data types (ADTs). You have both lists (linked lists) and vectors (int-indexed arrays) but they don't share a common supertype. If your function takes a list you cannot pass it a vector.
In Scala, which is a hybrid OOP-FP language, you also have subtype polymorphism, so you may not care whether the client code passes a List or a Vector: just require a Seq (possibly immutable) and you're done.
I guess to answer this question you have to ask yourself another one: "Do I want to embrace FP in toto?". If the answer is yes, then you shouldn't use Seq or any other abstract superclass in the OOP sense. Of course, the exception to this rule is the use of a trait/abstract class when defining ADTs in Scala. For example:
sealed trait Tree[+A]
case object Empty extends Tree[Nothing]
case class Node[A](value: A, left: Tree[A], right: Tree[A]) extends Tree[A]
In this case one would require Tree[A] as the type, of course, and then use, e.g., pattern matching to determine whether it's an Empty or a Node[A].
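For instance, a minimal sketch of consuming the ADT above (the function name is illustrative):

def size[A](t: Tree[A]): Int = t match {
  case Empty         => 0
  case Node(_, l, r) => 1 + size(l) + size(r)
}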
I guess my feeling about this subject is confirmed by the red book (Functional Programming in Scala): there they never use Seq, but List, Vector and so on. Also, Haskellers don't care about these problems and use lists whenever they need linked-list semantics and vectors whenever they need int-indexed-array semantics.
If, on the other hand, you want to embrace OOP and use Scala as a better Java then OK, you should follow the OOP best practice to talk to interfaces not to implementations.
If you're thinking: "I'd rather opt for mostly functional" then you should read Erik Meijer's The Curse of the Excluded Middle.

Benefit of explicitly providing the method return type or variable type in scala

This question may be very silly, but I am a little confused about which is the best way to go in Scala.
In Scala, the compiler does type inference and assigns the closest (or, maybe, most restrictive) type to each variable or method.
I am new to Scala, and from many sample codebases/libraries I have noticed that most of the time people do not explicitly provide the types. But in most of the code I write, I still provide the types explicitly. For example:
val someVal: String = "def"

def getMeResult(): List[String] = {
  val list: List[String] = List("abc", "def")
  list
}
The reason I started doing this, especially for method return types, is that when I write a method, I know what it should return, so by explicitly providing the return type I can find out whether I am making any mistakes. Also, I felt it is easier to understand what a method returns by reading the return type itself; otherwise I would have to check the return type of its last statement.
So my questions/doubts are:
1. Does it take less compilation time, since the compiler doesn't have to infer as much? Or does it not matter much?
2. What is the normal standard in the Scala world?
From "Scala in Depth" chapter 4.5:
For a human reading a nontrivial method implementation, inferring the
return type can be troubling. It’s best to explicitly document and
enforce return types in public APIs.
From "Programming in Scala" chapter 2:
Sometimes the Scala compiler will require you to specify the result
type of a function. If the function is recursive, for example, you
must explicitly specify the function’s result type.
It is often a good idea to indicate function result types explicitly.
Such type annotations can make the code easier to read, because the
reader need not study the function body to figure out the inferred
result type.
From "Scala in Action" chapter 2.2.3:
It’s a good practice to specify the return type for the users of the
library. If you think it’s not clear from the function what its return
type is, either try to improve the name or specify the return type.
From "Programming Scala" chapter 1:
Recursive functions are one exception where the execution scope
extends beyond the scope of the body, so the return type must be
declared.
For simple functions perhaps it’s not that important to show it
explicitly. However, sometimes the inferred type won’t be what’s
expected. Explicit return types provide useful documentation for the
reader. I recommend adding return types, especially in public APIs.
You have to provide explicit return types in the following cases (the first two are sketched in code after this list):
When you explicitly call return in a method.
When a method is recursive.
When two or more methods are overloaded and one of them calls another; the calling method needs a return type annotation.
When the inferred return type would be more general than you intended, e.g., Any.
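A minimal sketch of the first two cases (names are illustrative):

// Recursive: the compiler cannot infer the result type
def factorial(n: Int): Int =
  if (n <= 1) 1 else n * factorial(n - 1)

// An explicit `return` also requires a declared result type
def firstPositive(xs: List[Int]): Int = {
  for (x <- xs) if (x > 0) return x
  -1
}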
Another reason which has not yet been mentioned in the other answers is the following. You probably know that it is a good idea to program to an interface, not an implementation.
In the case of return values of functions or methods, that means that you don't want users of the function or method to know what specific implementation of some interface (or trait) the function returns - that's an implementation detail you want to hide.
If you write a method like this:
trait Example
class ExampleImpl1 extends Example { ... }
class ExampleImpl2 extends Example { ... }
def example() = new ExampleImpl1
then the return type of the method will be inferred to be ExampleImpl1 - so, it is exposing the fact that it is returning a specific implementation of trait Example. You can use an explicit return type to hide this:
def example(): Example = new ExampleImpl1
The standard rule is to use explicit types for API (in order to specify the type precisely and as a guard against refactoring) and also for implicits (especially because implicits without an explicit type may be ignored if the definition site is after the use site).
To the first question, type inference can be a significant tax, but that is balanced against the ease of both writing and reading expressions.
In the example, the type on the local list is not even a "better Java." It's just visual clutter.
However, it should be easy to read the inferred type. Occasionally, I have to fire up the IDE just to tell me what is inferred.
By implication, methods should be short enough so that it's easy to scan for the result type.
Sorry for the lack of references. Maybe someone else will step forward; the topic is frequent on MLs and SO.
2. The scala style guide says
Use type inference where possible, but put clarity first, and favour explicitness in public APIs.
You should almost never annotate the type of a private field or a local variable, as their type will usually be immediately evident in their value:
private val name = "Daniel"
However, you may wish to still display the type where the assigned value has a complex or non-obvious form.
All public methods should have explicit type annotations. Type inference may break encapsulation in these cases, because it depends on internal method and class details. Without an explicit type, a change to the internals of a method or val could alter the public API of the class without warning, potentially breaking client code. Explicit type annotations can also help to improve compile times.
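A minimal sketch of how an inferred public type can leak internals (class and member names are illustrative):

class Repo {
  // Inferred as List[String]: switching the body to a Vector silently changes the public API
  def names = List("a", "b")

  // Annotated as Seq[Int]: the internal collection can change without breaking clients
  def ids: Seq[Int] = List(1, 2)
}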
The twitter scala style guide says of method return types:
While Scala allows these to be omitted, such annotations provide good documentation: this is especially important for public methods. Where a method is not exposed and its return type obvious, omit them.
I think there's a broad consensus that explicit types should be used for public APIs, and shouldn't be used for most local variable declarations. When to use explicit types for "internal" methods is less clear-cut and more a matter of judgement; different organizations have different standards.
1. Type inference doesn't seem to visibly affect compilation time for the line where the inference happens (aside from a few rare cases with implicits which are basically compiler bugs) - after all, the compiler still has to check the type, which is pretty much the same calculation it would use to infer it. But if a method return type is inferred then anything using that method has to be recompiled when that method changes.
So inferring a method (or public variable) that's used in many places can slow down compilation (particularly if you're using incremental compilation). But inferring local or private variables, private methods, or public methods that are only used in one or two places, makes no (significant) difference.

Practical uses for Structural Types?

Structural types are one of those "wow, cool!" features of Scala. However, for every example I can think of where they might help, implicit conversions and dynamic mixin composition often seem like better matches. What are some common uses for them, and/or advice on when they are appropriate?
Aside from the rare case of classes which provide the same method but are neither related nor implement a common interface (for example, the close() method: Source, for one, does not extend Closeable), I find no use for structural types with their present restriction. If they were more flexible, however, I could well write something like this:
def add[T: { def +(x: T): T }](a: T, b: T) = a + b
which would neatly handle numeric types. Every time I think structural types might help me with something, I hit that particular wall.
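That said, the close() case mentioned above already works within the present restriction; a minimal sketch (the using name is illustrative):

import scala.language.reflectiveCalls

// Works for Source, Closeable, and anything else with a close(): Unit
def using[A <: { def close(): Unit }, B](resource: A)(f: A => B): B =
  try f(resource) finally resource.close()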
EDIT
However unuseful I find structural types myself, the compiler, however, uses it to handle anonymous classes. For example:
implicit def toTimes(count: Int) = new {
  def times(block: => Unit) = 1 to count foreach { _ => block }
}

5 times { println("This uses structural types!") }
The object resulting from (the implicit) toTimes(5) is of type { def times(block: => Unit) }, i.e., a structural type.
I don't know if Scala does that for every anonymous class -- perhaps it does. Alas, that is one reason why doing pimp-my-library that way is slow, as structural types use reflection to invoke their methods. Instead of an anonymous class, one should use a real class to avoid the performance issues of pimp-my-library.
Structural types are very cool constructs in Scala. I've used them to represent multiple unrelated types that share an attribute upon which I want to perform a common operation without a new level of abstraction.
I have heard one argument against structural types from people who are strict about an application's architecture. They feel it is dangerous to apply a common operation across types without an associative trait or parent type, because you then leave the rule of what type the method should apply to open-ended. Daniel's close() example is spot on, but what if you have another type that requires different behavior? Someone who doesn't understand the architecture might use it and cause problems in the system.
I think structural types are one of these features that you don't need that often, but when you need it, it helps you a lot. One area where structural types really shine is "retrofitting", e.g. when you need to glue together several pieces of software you have no source code for and which were not intended for reuse. But if you find yourself using structural types a lot, you're probably doing it wrong.
[Edit]
Of course implicits are often the way to go, but there are cases when you can't use them: imagine you have a mutable object you can modify with methods, but which hides important parts of its state, a kind of "black box". Then you have to work with this object somehow.
Another use case for structural types is when code relies on naming conventions without a common interface, e.g. in machine-generated code. In the JDK we can find such things as well, like the StringBuffer / StringBuilder pair (where the common interfaces Appendable and CharSequence are way too general).
Structural types give some of the benefits of dynamic languages to a statically typed language, specifically loose coupling. If you want a method foo() to call instance methods of class Bar, you don't need an interface or base class common to both foo() and Bar. You can define a structural type that foo() accepts and of whose existence Bar has no clue. As long as Bar contains methods that match the structural type's signatures, foo() will be able to call them.
It's great because you can put foo() and Bar in distinct, completely unrelated libraries, that is, with no commonly referenced contract. This reduces linkage requirements and thus further contributes to loose coupling.
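A minimal sketch of that decoupling, using the foo/Bar names from above:

import scala.language.reflectiveCalls

// foo only knows the structure it needs, not any named interface
def foo(x: { def bark(): String }): String = x.bark()

// Bar knows nothing about foo or its structural type
class Bar { def bark(): String = "woof" }

foo(new Bar) // compiles and returns "woof"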
In some situations, a structural type can be used as an alternative to the Adapter pattern, because it offers the following advantages:
Object identity is preserved (there is no separate object for the adapter instance, at least at the semantic level).
You don't need to instantiate an adapter - just pass a Bar instance to foo().
You don't need to implement wrapper methods - just declare the required signatures in the structural type.
The structural type doesn't need to know the actual instance class or interface, while the adapter must know Bar so it can call its methods. This way, a single structural type can be used for many actual types, whereas with adapter it's necessary to code multiple classes - one for each actual type.
The only drawback of structural types compared to adapters is that a structural type can't be used to translate method signatures. So, when signatures don't match, you must use adapters that contain some translation logic. I particularly don't like to code "intelligent" adapters because often they are more than just adapters and increase complexity. If a class's clients need some additional method, I prefer to simply add that method, since it usually doesn't affect the footprint.