Scala's collection library contains the forwarders IterableForwarder, TraversableForwarder, and SeqForwarder, and proxies like IterableProxy, MapProxy, SeqProxy, SetProxy, TraversableProxy, etc. Forwarders and proxies both delegate collection methods to an underlying collection object. The main difference between the two is that forwarders don't forward calls that would create new collection objects of the same kind.
In which cases would I prefer one of these types over the other? Why and when are forwarders useful? And if they are useful why are there no MapForwarder and SetForwarder?
I assume proxies are most often used if one wants to build a wrapper for a collection with additional methods or to pimp the standard collections.
I think this answer provides some context about Proxy in general (and your assumption about wrapper and pimping would be correct).
As far as I can tell, the subtypes of Proxy are more targeted at end users. When using Proxy, the proxy object and the self object will be equal for all intents and purposes. I think that's actually the main difference. Don't use Proxy if that assumption does not hold.
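For instance, here is a minimal sketch of the wrapper use case, assuming the 2.10-era collection library where SeqProxy still exists (class and method names are mine):

// A wrapper that adds one method and delegates everything else to self.
class Tagged[A](val self: Seq[A], tag: String) extends scala.collection.SeqProxy[A] {
  def describe: String = tag + ": " + self.mkString(", ")
}

val xs = new Tagged(Seq(1, 2, 3), "sample")
xs.length           // 3, delegated to the underlying Seq
xs == Seq(1, 2, 3)  // true: the proxy and self are equal, as noted above
xs.describe         // "sample: 1, 2, 3"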
The Forwarder traits only seem to be used to support ListBuffer and may be more appropriate if one needs to roll their own collection class built on top of the CanBuildFrom infrastructure. So I would say they are more targeted at library writers whose library is based on the 2.8 collection design.
Teamcenter provides two OOTB API sets, loose and strong. What is the difference between the two? When should we use the loose APIs and when the strong ones?
We actually have three: Loose, Strong, and RAC. RAC is explicitly for rich client customization, so if you are calling an SOA that you have authored in BMIDE from rich client Java code, you would do so by adding the RAC jars as a dependency. Beyond that, you could have another application or client that talks to Teamcenter but may not be as tightly coupled to it. Depending on that, you would choose either the Loose or the Strong jars. Loose and Strong represent the extent to which your custom application depends on Teamcenter.
In Teamcenter you have different types of business objects (BOs): Dataset, Item, ItemRevision, etc. With the Strong jars you get corresponding Java classes Dataset, Item, ItemRevision, etc., and the attributes defined on the BOs in BMIDE are available as getters/setters on the corresponding classes, e.g. ItemRevision.get_date_released().
With the Loose jars, however, you have a single class called ModelObject: any type of BO instance is represented in code by ModelObject, and a property is queried via a get API like ModelObject.getPropertyObject("date_released").
So, as you can see, the Strong jars introduce tight coupling compared to the Loose jars, and which one to use depends on your use case.
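To illustrate the difference with a sketch (hypothetical classes standing in for the real Teamcenter jars, written in Scala for brevity):

// "Strong" style: one class per BO, attributes as typed getters.
trait StrongItemRevision {
  def get_date_released(): java.util.Date // checked at compile time
}

// "Loose" style: one generic class, properties looked up by name.
class LooseModelObject(props: Map[String, Any]) {
  def getPropertyObject(name: String): Option[Any] = props.get(name)
}

// With the strong style, a renamed or removed attribute breaks the build;
// with the loose style, the same change only shows up at runtime.
val rev = new LooseModelObject(Map("date_released" -> new java.util.Date))
rev.getPropertyObject("date_released") // Some(<date>)
rev.getPropertyObject("misspelled")    // None, discovered only at runtime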
I see the benefit of interfaces: being able to add new implementations via a contract.
But I don't see a solution to the following problem:
Imagine you have an interface DB with a method startTransaction.
Everything is fine while you implement it for MySQL and PostgreSQL. But tomorrow you move to MongoDB, and then you have no transaction support.
What do you do?
1) An empty method: bad, because you think you have transactions but you don't.
2) Create your own method: but then it will need parameters different from those of the regular startTransaction method.
And on top of that, sometimes simple interfaces just don't work.
Example: you need additional parameters for different implementations.
If you're exposing the concept of transactions on your interface, then you must functionally support transactions no matter what, since users of the interface will logically depend on it. I.e., if a caller can start a transaction, then they expect to also be able to roll back a transaction of several queries. Since Mongo doesn't natively have any concept of rolling back transactions, there are two possibilities:
You implement the possibility of rolling back queries in code, emulating the functionality of transactions for a database which doesn't natively support it. (Whether that's even reliably possible in Mongo is a debatable topic.)
Your interface is working at the wrong level of abstraction. If your interface is promising functionality an implementation can't deliver, then either the interface or the implementation is unrealistic.
In practice, Mongo and SQL databases are such different beasts that you would either never make this kind of change without changing large parts of your business logic around it; or you specify your interface using an extremely minimal common-denominator interface only, most certainly not exposing technology-specific concepts on an abstract interface.
You are mostly correct: interfaces can be very useful, but also problematic in (fast-)changing code. A best practice concerning interfaces is to keep them as small as possible.
When something can handle a transaction, create an interface only for handling a transaction. Split interfaces up into the smallest logically possible parts; that way, when new classes emerge, you can assign them exactly the specific interfaces that determine their methods (see the sketch below).
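A rough sketch of that splitting, with hypothetical names:

// Querying and transactions are separate, minimal capabilities.
trait QueryExecutor {
  def run(query: String): Seq[Map[String, Any]]
}

trait Transactional {
  def startTransaction(): Unit
  def commit(): Unit
  def rollback(): Unit
}

// A SQL-backed store offers both capabilities...
class PostgresStore extends QueryExecutor with Transactional {
  def run(query: String) = Seq.empty // stub
  def startTransaction(): Unit = ()  // stub
  def commit(): Unit = ()
  def rollback(): Unit = ()
}

// ...while a store without native transactions simply never claims them.
class MongoStore extends QueryExecutor {
  def run(query: String) = Seq.empty // stub
}

// Callers that need transactions ask for that capability explicitly,
// so a MongoStore can never be passed in by accident.
def transfer(db: QueryExecutor with Transactional): Unit = {
  db.startTransaction()
  try { db.run("..."); db.commit() }
  catch { case e: Exception => db.rollback(); throw e }
}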
As for the multiple-parameter problem: this can indeed be problematic. See whether the specific value could be moved to a constructor, or whether it indicates that the action you are performing is in fact slightly different from the action that does not need this parameter.
I hope this helps, good luck!
You are right that interfaces are used to add new implementations via a contract, but those implementations have to possess some similarity.
Let's take an example:
You cannot implement a dog using a human interface, even though a dog is a living organism too.
You are trying to do the same thing here: implement a non-SQL DB using a SQL DB implementation.
When I write Java code, I find that annotation-based libraries are very popular, e.g. Hibernate, Jackson, Gson, Spring-MVC. But in Scala, most of the popular libraries do not provide annotations, or provide them but recommend non-annotation approaches, e.g. Squeryl, Slick, Argonaut, Unfiltered, etc.
Sometimes I find annotations easier to read and maintain, so why are people not so interested in them?
One reason is that annotations often have to be used at the declaration site. Hence, you have to "pollute" your domain models with code not relevant to your business logic. Solutions based on macros or type classes, on the other hand, are usually applied at the use site. This allows higher reusability of your domain models.
E.g., what if you need different serialization logic for different tasks? With annotations you usually have no choice but to implement an additional representation of your model with modified annotations. With type classes (probably derived automatically through macros), you just implement another instance and inject it according to your needs.
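As a minimal sketch of the type-class approach (all names are mine, and no macro derivation is shown):

// Two JSON representations of the same model, chosen at the use site
// rather than fixed once by annotations on the class.
case class User(name: String, email: String)

trait JsonWriter[A] { def write(a: A): String }

object JsonWriter {
  // Default instance: full representation for internal use.
  implicit val full: JsonWriter[User] = new JsonWriter[User] {
    def write(u: User) = s"""{"name":"${u.name}","email":"${u.email}"}"""
  }
  // Alternative instance: redacted representation for public output.
  val public: JsonWriter[User] = new JsonWriter[User] {
    def write(u: User) = s"""{"name":"${u.name}"}"""
  }
}

def toJson[A](a: A)(implicit w: JsonWriter[A]): String = w.write(a)

toJson(User("Ada", "ada@example.com"))                    // full, via implicit
toJson(User("Ada", "ada@example.com"))(JsonWriter.public) // redacted, injected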
Macros and implicits can often be used as a substitute for annotations and have the benefit of being statically checked.
Introduction
I am working on an API written in Scala. I use data transfer objects (DTOs) as parameters passed to the API's functions. The DTOs will be instantiated by the API's user.
As the API is pretty abstract/generic, I want to specify the attributes of an object that the API should operate on. Example:
case class Person(name: String, birthdate: Date)
When an instance of Person "P" is passed to the API, the API needs to know the attributes of "P" it should operate on: either just name or birthdate, or both of them.
So I need to design a DTO that contains the instance of "P" itself, some kind of declaration of the attributes and maybe additional information on the type of "P".
String-based approach
One way would be to use Strings to specify the attributes of "P" and maybe its type. This would be relatively simple, as Strings are pretty lightweight and well known. And since there is a formal notation for packages, types, and members as Strings, the declarations would be structured to a certain degree.
On the other hand, the String declarations must be validated, because a user might pass invalid Strings. I could imagine representing the attributes with dedicated types instead of Strings, which may have the benefit of more structure, and perhaps those types could even be designed so that only valid instances can exist.
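For example, a rough sketch of the String-based approach with up-front validation (using plain Java reflection through ClassTag; the names are mine):

import scala.reflect.ClassTag

// Checks the attribute names against the class's declared fields at
// construction time, so invalid specs fail fast.
case class AttributeSpec[T](attrs: Set[String])(implicit tag: ClassTag[T]) {
  private val declared = tag.runtimeClass.getDeclaredFields.map(_.getName).toSet
  require(attrs.subsetOf(declared), s"unknown attributes: ${attrs -- declared}")
}

AttributeSpec[Person](Set("name"))    // fine
// AttributeSpec[Person](Set("nmae")) // throws IllegalArgumentException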
Reflection API approach
Of course the reflection API came to mind, and I am experimenting with declaring the attributes using types from the reflection API. Unfortunately, the Scala 2.10.x reflection API is a bit unintuitive: there are names, symbols, mirrors, types, and type tags, which can cause a bit of confusion.
Basically I see two alternatives to attribute declaration with Strings:
Attribute declaration with reflection API's "Names"
Attribute declaration with reflection API's "Symbols" (especially TermSymbol)
If I go this way, as far as I can see, the API's user, who constructs the DTOs, will have to deal with the reflection API and its Names/Symbols. The API's implementation will also have to make use of the reflection API. So there are two places with reflective code, and the user must have at least a little knowledge of the reflection API.
Questions
However, I don't know how heavyweight these approaches are:
Are Names or Symbols expensive to construct?
Does the reflection API do any caching of expensive operation results or should I take care about that?
Are Names and Symbols transferable to another JVM via network?
Are they serializable?
Main question: Are scala reflection API Names or Symbols adequate for use inside transfer objects?
It seems complicated to do this with the reflection API. Any hints are welcome. And any hints on other alternatives, too.
P.S.: I did not include my own code yet, because my API is complex and the reflection part is in a pretty experimental state. Maybe I can deliver something useful later.
1a) Names are easy to construct and are lightweight, as they are just a bit more than strings.
1b) Symbols can't be constructed by the user, but are created internally when one resolves names using APIs like staticClass or member. First calls to such APIs usually involve unpacking the type signatures of the symbol's owners from ScalaSignature annotations, so they might be costly. Subsequent calls use the already loaded signatures, but still pay the cost of a by-name lookup in a sort of hashtable (1). declaration costs less than member, because declaration doesn't look into base classes.
2) Type signatures (e.g. lists of members of classes, params + return type of methods, etc) are loaded lazily and therefore are cached. Mappings between Java and Scala reflection artifacts are cached as well (2). To the best of my knowledge, the rest (e.g. subtyping checks) is generally uncached with a few minor exceptions.
3-4) Reflection artifacts depend on their universe and at the moment can't be serialized (3).
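To make this concrete, here is a small sketch against the Scala 2.10 runtime reflection API, using the Person example from the question:

import scala.reflect.runtime.{universe => ru}

case class Person(name: String, birthdate: java.util.Date)

// A Name is cheap: little more than an interned string (see 1a above).
val attr: ru.TermName = ru.newTermName("birthdate")

// A Symbol comes from resolving the name against a type's members;
// the first such lookup is where the unpickling cost is paid (see 1b).
val personType = ru.typeOf[Person]
val attrSymbol = personType.member(attr)

attrSymbol.isTerm // true: the attribute exists on Person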
Are interfaces a layer between objects (different objects) and actions (different object types trying to perform the same action)? And does an interface check what kind of object it is and how it can perform a particular action?
I'd say that it's better to think of an interface as a promise. In Java there is the interface construct that allows for inheritance of an API but doesn't specify behavior. In general, though, an interface comprises the methods an object presents for interacting with it.
In duck-typed languages, if an object presents a particular set of methods (the interface) specific to a particular class, then that object is like the specifying class.
Enforcing an interface is complicated, since you need to specify some set of criteria for behavior. An interesting example would be the design-by-contract ideas in Eiffel.
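In Scala, a small part of that contract idea can be approximated with require and ensuring; a rough sketch with hypothetical names:

// The trait states the promise; require/ensuring check parts of it.
trait Stack[A] {
  def push(a: A): Stack[A]
  def pop: (A, Stack[A])
  def size: Int
}

class ListStack[A](xs: List[A]) extends Stack[A] {
  def push(a: A): Stack[A] =
    new ListStack(a :: xs) ensuring (_.size == size + 1) // postcondition
  def pop: (A, Stack[A]) = {
    require(xs.nonEmpty, "pop on empty stack")           // precondition
    (xs.head, new ListStack(xs.tail))
  }
  def size: Int = xs.size
}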
Are you asking about the term "interface" as used in a specific language (such as Java or Objective-C), or the generic meaning of the term?
If the latter, then an "interface" can be almost anything. Pour oil on water -- the line between them is an "interface". An interface is any point where two separate things meet and interact.
The term does not have a rigorous definition in computing, but refers to any place where two relatively distinct domains interact.
To understand interfaces in .net or Java, one must first recognize that inheritance combines two concepts:
Implementations of the derived type will include all fields (including private ones) of the base type, and can access any and all public or protected members of the base type as if it were its own.
Objects of the derived type may be freely used in place of objects of the base type.
Allowing objects to use members of more than one base type as their own is complicated. Some languages provide ways of doing so, but there can often be confusion as to which portion of which base object is being referred to, especially if one is inheriting from two classes which independently inherit from a third. Consequently, many frameworks only allow objects to inherit from one base object.
On the other hand, allowing objects to be substitutable for more than one other type of object does not create these difficulties. An object representing a database table may, for example, allow itself to be passed to a routine that wants a "thing that can enumerate its contents, which are of type T" (IEnumerable<T> in .net), to a routine that wants a "thing that can have things of type T added to it" (ICollection<T> in .net), or to a routine that wants a "thing that wants to know when it's no longer needed" (IDisposable in .net). Note that there are some things that want notification when they're no longer needed which do not represent enumerable collections, and there are other things that represent enumerable collections but can be abandoned without notification. Thus, neither type of object could inherit from the other; but if one uses an interface to represent "things which can enumerate their contents, which are of type T" or "things that want to know when they are no longer needed", then there's no problem having classes implement both interfaces.
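A Scala sketch of that last point, with made-up names standing in for the .net interfaces:

// Two unrelated capabilities that could not share one inheritance chain.
trait Enumerable[T] {
  def elements: Iterator[T]
}

trait Disposable {
  def dispose(): Unit
}

// A database-table-like object can implement both interfaces without
// either capability having to inherit from the other.
class Table[T](rows: Seq[T]) extends Enumerable[T] with Disposable {
  def elements: Iterator[T] = rows.iterator
  def dispose(): Unit = println("releasing resources")
}

def printAll[T](e: Enumerable[T]): Unit = e.elements.foreach(println)
def cleanup(d: Disposable): Unit = d.dispose()

val t = new Table(Seq(1, 2, 3))
printAll(t) // used as an Enumerable
cleanup(t)  // used as a Disposable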