Scala case classes have a limit of 22 fields in the constructor. I want to exceed this limit; is there a way to do it with inheritance or composition that works with case classes?
More recently (Oct 2016, six years after the OP), the blog post "Scala and 22" from Richard Dallaway explores that limit:
Back in 2014, when Scala 2.11 was released, an important limitation was removed:
Case classes with > 22 parameters are now allowed.
That said, there still exists a limit on the number of case class fields; see https://stackoverflow.com/a/55498135/1586965
This may lead you to think there are no 22 limits in Scala, but that’s not the case. The limit lives on in functions and tuples.
The fix (PR 2305) introduced in Scala 2.11 removed the limitation for the above common scenarios: constructing case classes, field access (including copying), and pattern matching (barring edge cases).
It did this by omitting unapply and tupled for case classes with more than 22 fields.
In other words, the limits imposed by Function22 and Tuple22 still exist.
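To see where the line is drawn (a sketch; the failing calls are left commented out), a 23-field case class compiles on Scala 2.11+, but its companion gets no tupled or unapply:

// Compiles on Scala 2.11+ despite having 23 fields:
case class Big(
  f1: Int, f2: Int, f3: Int, f4: Int, f5: Int, f6: Int, f7: Int, f8: Int,
  f9: Int, f10: Int, f11: Int, f12: Int, f13: Int, f14: Int, f15: Int,
  f16: Int, f17: Int, f18: Int, f19: Int, f20: Int, f21: Int, f22: Int,
  f23: Int
)

// But the 22 limit resurfaces through functions and tuples:
// val t = Big.tupled     // error: value tupled is not a member of object Big
// val u = Big.unapply _  // error: value unapply is not a member of object Big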
Working around the Limit (post Scala 2.11)
There are two common tricks for getting around this limit.
The first is to use nested tuples.
Although it's true a tuple can't contain more than 22 elements, each element itself could be a tuple.
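For example, a minimal sketch of the nesting:

// The outer tuple has only two elements, yet carries five values:
val wide: ((Int, Int, Int), (String, String)) = ((1, 2, 3), ("a", "b"))

// Inner elements are reached through the outer slots:
val third: Int = wide._1._3    // 3
val label: String = wide._2._2 // "b"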
The other common trick is to use heterogeneous lists (HLists), where there’s no 22 limit.
If you want to make use of case classes, you may be better off using the shapeless HList implementation. We've created the Slickless library to make that easier. In particular, the recent mappedWith method converts between shapeless HLists and case classes. It looks like this:
import slick.driver.H2Driver.api._
import shapeless._
import slickless._
class LargeTable(tag: Tag) extends Table[Large](tag, "large") {
  def a = column[Int]("a")
  def b = column[Int]("b")
  def c = column[Int]("c")
  /* etc */
  def u = column[Int]("u")
  def v = column[Int]("v")
  def w = column[Int]("w")

  def * = (a :: b :: c :: /* etc */ :: u :: v :: w :: HNil)
    .mappedWith(Generic[Large])
}
There’s a full example with 26 columns in the Slickless code base.
This issue is going to be fixed in Scala 2.11.
Build a normal class that acts like a case class.
I still use Scala 2.10.x since that is the latest version supported by Spark, and in Spark SQL I make heavy use of case classes.
The workaround for case classes with more than 22 fields:
class Demo(val field1: String,
           val field2: Int,
           // .. and so on ..
           val field23: String)
    extends Product
    // For Spark it has to be Serializable
    with Serializable {

  def canEqual(that: Any) = that.isInstanceOf[Demo]

  def productArity = 23 // number of columns

  def productElement(idx: Int): Any = idx match {
    case 0 => field1
    case 1 => field2
    // .. and so on ..
    case 22 => field23
  }
}
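To see the Product contract in action without writing out all 23 fields, here is a smaller self-contained version of the same pattern (three fields, names invented for illustration):

class MiniDemo(val field1: String, val field2: Int, val field3: String)
    extends Product with Serializable {
  def canEqual(that: Any) = that.isInstanceOf[MiniDemo]
  def productArity = 3
  def productElement(idx: Int): Any = idx match {
    case 0 => field1
    case 1 => field2
    case 2 => field3
  }
}

val d = new MiniDemo("a", 1, "c")
// Generic consumers (such as Spark's row encoders) walk the fields this way:
(0 until d.productArity).map(d.productElement) // Vector(a, 1, c)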
It's interesting that your constructor is that loaded, but you could package related values into case classes of their own.
So while you might have
case class MyClass(street: String, city: String, state: String, zip: Integer)
you can do this instead (with Address as its own case class):
case class Address(street: String, city: String, state: String, zip: Integer)
case class MyClass(address: Address)
You have other options too:
Group items into tuples
Create your own Function23 trait (or whatever)
Use currying (multiple parameter lists; a sketch follows below)
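On the currying option: a case class can take multiple parameter lists, and only the first one participates in the generated equals/hashCode/copy/unapply, so each list can stay within 22 entries. A hedged sketch (note the equality caveat):

// Only the first parameter list counts toward the case-class machinery:
case class Demo(a: Int, b: Int /* ... up to 22 ... */)(val c: Int, val d: Int)

val x = Demo(1, 2)(3, 4)
val y = Demo(1, 2)(99, 99)
x == y // true: the second parameter list is ignored by the generated equals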
UPDATE: As others have noted, this is no longer an issue since the release of Scala 2.11, though I would hesitate to use the term "fix." However, the "Catch 22," if you will, sometimes still shows up in third-party Scala libraries.
When you have that many values, it's usually a sign that your design needs to be reworked anyway.
Form intermediate case classes that then aggregate into the larger one. This also makes the code much easier to understand, reason about, and maintain, as well as sidestepping the issue you are having.
For example, if I wanted to store user data I might do this....
case class User(name: Name, email: String)
case class Name(first: String, last: String)
With so few things this of course wouldn't be necessary, but if you have 22 things you are trying to cram into one class, you'll want to do this sort of intermediate case-class grouping anyway.
Related
I have the following case class:
case class Example[T](
  obj: Option[T] | T = None,
)
This allows me to construct it like Example(myObject) instead of Example(Some(myObject)).
To work with obj I need to normalise it to Option[T]:
lazy val maybeIn = obj match
  case o: Option[T] => o
  case o: T => Some(o)

but this produces the warning: the type test for Option[T] cannot be checked at runtime.
I tried TypeTest but also got warnings, and the solutions I found look really complicated; see https://stackoverflow.com/a/69608091/2750966
Is there a better way to achieve this pattern in Scala 3?
I don't know about Scala 3, but you could simply do this:
case class Example[T](v: Option[T] = None)

object Example {
  def apply[T](t: T): Example[T] = Example(Some(t))
}
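A brief usage sketch of the overload (both calls compile with the definitions above):

val e = Example(42)    // picks the overloaded apply: Example(Some(42))
val n = Example[Int]() // falls back to the synthetic apply's default: Example(None)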
One could also go for an implicit conversion, given the OP's specific use case:
import scala.language.implicitConversions

case class Optable[Out](value: Option[Out])
object Optable {
  implicit def fromOpt[T](o: Option[T]): Optable[T] = Optable(o)
  implicit def fromValue[T](v: T): Optable[T] = Optable(Some(v))
}

case class SomeOpts(i: Option[Int], s: Option[String])
object SomeOpts {
  def apply(i: Optable[Int], s: Optable[String]): SomeOpts = SomeOpts(i.value, s.value)
}

println(SomeOpts(15, Some("foo")))
We have a specialized Option-like type for this purpose: OptArg (in Scala 2 but should be easily portable to 3)
import com.avsystem.commons._

def gimmeLotsOfParams(
  intParam: OptArg[Int] = OptArg.Empty,
  strParam: OptArg[String] = OptArg.Empty
): Unit = ???

gimmeLotsOfParams(42)
gimmeLotsOfParams(strParam = "foo")
It relies on an implicit conversion so you have to be a little careful with it, i.e. don't use it as a drop-in replacement for Option.
The implementation of OptArg is simple enough that if you don't want external dependencies then you can probably just copy it into your project or some kind of "commons" library.
EDIT: the following answer is incorrect. As of Scala 3.1, flow analysis is only able to check for nullability. More information is available in the Scala book.
I think the answer already given is probably better suited to the use case you proposed (exposing an API that can take a plain value and normalize it to an Option).
However, the question in the title is still interesting and I think it makes sense to address it.
What you are observing is a consequence of type parameters being erased at runtime, i.e. they only exist during compilation, while matching happens at runtime, once those have been erased.
However, the Scala compiler is able to perform flow analysis for union types. Intuitively I'd say there's probably a way to make it work in pattern matching (as you did), but you can make it work for sure using an if and isInstanceOf (not as clean, I agree):
case class Example[T](
  obj: Option[T] | T = None
) {
  lazy val maybeIn =
    if (obj.isInstanceOf[Option[_]]) {
      obj
    } else {
      Some(obj)
    }
}
You can play around with this code here on Scastie.
Here is the announcement from 2019 when flow analysis was added to the compiler.
I wondered whether something like this makes sense in Scala:
object CaseClassUnion extends App {
  case class HttpConfig(bindUrl: String, port: String)
  case class DbConfig(url: String, usr: String, pass: String)

  val combined: HttpConfig with DbConfig = ???
  // HttpConfig("0.0.0.0", "21") ++ DbConfig("localhost", "root", "root")
  // would be nice to have something like that
}
At least this compiles... Is there a way, perhaps with macro magic, to achieve a union of two classes given their instances?
In ZIO, I believe, there is something like this in reverse:
val live: ZLayer[ProfileConfiguration with Logging, Nothing, ApplicationConfiguration] =
  ZLayer.fromServices[ProfileConfigurationModule.Service, Logger[String], Service] {
    (profileConfig, logger) => ???
  }

where we convert ProfileConfiguration with Logging into a function (ProfileConfigurationModule.Service, Logger[String]) => Service.
Several things.
When you have several traits combined with with, Scala performs trait linearization to combine them into one class with a linear hierarchy. But that works for traits, which don't have constructors!
A case class (which is not a trait) cannot be extended by another case class (at all), because that would break contracts like:
case class A(a: Int)
case class B(a: Int, b: String) extends A(a) // rejected by the compiler; suppose it weren't

A(1) == B(1, "")     // true: B is an A and their shared fields match
B(1, "") != A(1)     // true: A is not a B
B(1, "").hashCode != A(1).hashCode // A == B is true, yet the hashCodes differ!
which means that you cannot even generate a case class combination manually. If you want to "combine" them, use some product: a tuple, another case class, etc.
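For instance, a minimal sketch using the question's own config classes, combining them in a wrapper case class instead of a compound type (AppConfig is an invented name):

case class HttpConfig(bindUrl: String, port: String)
case class DbConfig(url: String, usr: String, pass: String)

// The "union" as a product: one field per combined class.
case class AppConfig(http: HttpConfig, db: DbConfig)

val combined = AppConfig(
  HttpConfig("0.0.0.0", "21"),
  DbConfig("localhost", "root", "root")
)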
If you are curious about ZIO it:
uses traits
uses them as a sort of type-level trick to represent an unordered set of dependencies, where type inference calculates the set sum when you combine operations, and some clever trickery around .provide removes a dependency from the set
ZLayers are just making these shenanigans easier
So even if you pass some A with B there, you either combined it yourself using the cake pattern, or you passed the dependencies one by one. A ZIO developer might never face the problem of needing a macro to combine several case classes (.provide(combineMagically(A, B, C, D, ...))), since they could pass implementations of each dependency one by one (.provide(A).provide(B)), and the code underneath never needs the combination of these types as one value; it's just a compile-time trick that might never translate into the requirement for an actual value of type A with B with C with D ....
TL;DR: You cannot generate a combination of 2 case classes; ZIO uses compound types as some sort of type-level set to trace dependencies and it doesn't actually require creating values of the compound types.
I am trying to use spray-json to marshal incoming JSON with more than 22 fields. Since there is no jsonFormat23() method, I am having to nest my case classes to get around the limitation. However, the incoming JSON does not know about the nested structure.
Is there a way to avoid using nested structure in Spray Json?
EDIT
Here is my solution, so that others do not have the same pain. One of my issues was that all my fields were optional, which added another layer of complexity. You can put as many fields as you want in this solution.
implicit object myFormat extends RootJsonFormat[myFormat] {

  override def write(js: myFormat): JsValue =
    JsObject(
      List(
        Some("language" -> js.language.toJson),
        Some("author" -> js.author.toJson),
        // .. and so on ..
      ).flatten: _*
    )

  override def read(json: JsValue): myFormat = {
    val fieldNames = Array("language", /* ... */ "author")
    val jsObject = json.asJsObject
    jsObject.getFields(fieldNames: _*)
    // code to transform fields to case class
    // initializes the class with the list of parameters
    myFormat.getClass.getMethods.find(x => x.getName == "apply" && x.isBridge)
      .get.invoke(myFormat, mylist map (_.asInstanceOf[AnyRef]): _*).asInstanceOf[myFormat]
  }
}
You can implement RootJsonFormat as described here to work around the Tuple22 and Function22 limitations. There is no limit of 22 parameters in case classes anymore (with caveats), so you can keep your class structure flat. You don't even have to use a case class as the target deserialization type when implementing RootJsonFormat; it could be a regular class instead.
Note that even though you can get your JSON parsed into a case class, there may be other limitations of 22 you could face in your code. See this for an explanation.
For example, you get your case class and now want to save it to a DB, and your DB framework can't work around the 22-parameter limitation without a custom serializer. In that case, converting to nested case classes might be simpler.
In Dotty the limit of 22 will be completely gone, but that will take some time:
The limit of 22 for the maximal number of parameters of function types has been dropped. Functions can now have an arbitrary number of parameters. Functions beyond Function22 are represented with a new trait scala.FunctionXXL.

The limit of 22 for the size of tuples is about to be dropped. Tuples will in the future be represented by an HList-like structure which can be arbitrarily large.
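Once on Scala 3 (the continuation of Dotty), code like the following compiles; a small sketch, with FunctionXXL/TupleXXL staying an implementation detail you never name directly:

// 23 elements: beyond the old Tuple22 ceiling, yet fine in Scala 3.
val big = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
           13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23)

println(big(22))  // 23: indexed element access still works
println(big.size) // 23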
Part of a current project involves converting from types coupled to a database to a generic type used when serializing the results out to clients as JSON. The current implementation in Scala performs the transformation by matching on the runtime type, using Scala's TypeTag:
def Transform[A: TypeTag](objects: Seq[A]): Seq[Children] = typeOf[A] match {
  case pc if pc =:= typeOf[ProductCategory] =>
    TransformProductCategory(objects.asInstanceOf[Seq[ProductCategory]])
  case pa if pa =:= typeOf[ProductArea] =>
    TransformProductArea(objects.asInstanceOf[Seq[ProductArea]])
  case pg if pg =:= typeOf[ProductGroup] =>
    TransformProductGroup(objects.asInstanceOf[Seq[ProductGroup]])
  case psg if psg =:= typeOf[ProductSubGroup] =>
    TransformProductSubGroup(objects.asInstanceOf[Seq[ProductSubGroup]])
  case _ =>
    throw new IllegalArgumentException("Invalid transformation")
}
The types used as input are all case classes and are defined internally within the application, for example:
case class ProductCategory(id: Long,
                           name: String,
                           thumbnail: Option[String],
                           image: Option[String],
                           sequence: Int)
This approach, although suitable at the moment, doesn't feel functional or scalable when potentially more DB types are added. I also feel the asInstanceOf calls should be redundant, as the type has already been asserted. My limited knowledge of implicits suggests they could be used instead to perform the transformation, removing the need for the Transform[A: TypeTag](objects: Seq[A]): Seq[Children] method altogether. Or maybe there is a different approach I should use instead?
You can define a trait like this:
trait Transformer[A] {
  def transformImpl(x: Seq[A]): Seq[Children]
}
Then you can define some instances:
object Transformer {
  implicit val forProduct = new Transformer[ProductCategory] {
    def transformImpl(x: Seq[ProductCategory]) = ...
  }
  ...
}
And then finally:
def transform[A: Transformer](objects: Seq[A]): Seq[Children] =
  implicitly[Transformer[A]].transformImpl(objects)
Preferably, you should define your implicit instances either in the Transformer object or in the companion objects of your category classes.
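To make the pattern concrete, here is a self-contained sketch with simplified stand-in types (the real Children and ProductCategory presumably carry more fields):

case class Children(name: String)
case class ProductCategory(id: Long, name: String)

trait Transformer[A] {
  def transformImpl(x: Seq[A]): Seq[Children]
}

object Transformer {
  // One instance per DB type; the compiler picks the right one at each call site.
  implicit val forProductCategory: Transformer[ProductCategory] =
    new Transformer[ProductCategory] {
      def transformImpl(x: Seq[ProductCategory]) = x.map(p => Children(p.name))
    }
}

def transform[A: Transformer](objects: Seq[A]): Seq[Children] =
  implicitly[Transformer[A]].transformImpl(objects)

transform(Seq(ProductCategory(1L, "toys"))) // Seq(Children(toys))

Adding a new DB type then means adding one implicit instance; transform itself never changes and no asInstanceOf is needed.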
I'm not sure exactly how your program is supposed to work, nor do I know whether your Transforms are self-made types or something you pulled from a library; however, I may have a solution anyway.
One thing match is really good at handling is case classes.
So rather than manually checking the type of the input data, you could wrap them all in case classes (if you need to bring data along) or case objects (if you don't).
That way you can do something like this:
// this code assumes ProductCategory, ProductArea, etc.
// all extend the trait ProductType
def Transform(pType: ProductType): Seq[Children] = pType match {
  case ProductCategory(objects) => TransformProductCategory(objects)
  case ProductArea(objects)     => TransformProductArea(objects)
  case ProductGroup(objects)    => TransformProductGroup(objects)
  case ProductSubGroup(objects) => TransformProductSubGroup(objects)
}
By wrapping everything in case classes, you can specify exactly which data you want to bring along, and as long as they (the case classes, not the data) all inherit from the same class/trait, you should be fine.
And since there's only a small handful of classes that extend ProductType, you don't need a default case, because there is no default case!
Another benefit is that it's easily extensible: just add more cases and case classes! A self-contained sketch follows below.
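Here is that sketch, with simplified stand-in payloads (Seq[String] instead of the real data); marking ProductType as sealed is an addition that lets the compiler verify the match is exhaustive:

case class Children(name: String)

sealed trait ProductType
case class ProductCategory(objects: Seq[String]) extends ProductType
case class ProductArea(objects: Seq[String]) extends ProductType

// No default case needed: the compiler knows all subtypes of a sealed trait.
def Transform(pType: ProductType): Seq[Children] = pType match {
  case ProductCategory(objects) => objects.map(o => Children("category: " + o))
  case ProductArea(objects)     => objects.map(o => Children("area: " + o))
}

Transform(ProductCategory(Seq("a", "b")))
// Seq(Children("category: a"), Children("category: b"))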
Note that this solution may require you to refactor your code a LOT, so bear that in mind before you throw yourself into it.
I'm trying to build an internal DSL in Scala. I have the following types:
case class A(name: String)
case class Group(list: A*) // it can also be list: List[A]
Creating a group of A's using the normal syntax is as follows:
val group1 = Group(A("a1"), A("a2"), ...)
which is quite ugly. I would like to present a group as (A("a1"), A("a2"), ...) and possibly later ("a1", "a2", ...), if possible.
I could not figure out how to convert (A("a1"), A("a2"), ...) into Group(A("a1"), A("a2"), ...). It would be nice if we could convert (A("a1"), A("a2"), ...) into an instance of the Group class. (I don't care about supporting an unlimited number of A's inside; a maximum of 8 will suffice.)
So my question is: Is there a way to convert a tuple to a specific instance of a class? If not, how would you solve this problem?
First of all, solving something for tuples of any arity in Scala is nasty, because the tuple classes are distinct: you'd have to write 8 or 22 or however many conversions you want to support.
But anyway, tuples are for heterogeneously typed things, whereas here you require a collection of a common type. So while the tuple syntax may look nice, I would recommend not trying to use it for this case in your DSL. Stick to collections, or just alias your Group type, even at the price of an additional character, e.g.
object A {
  implicit def fromString(name: String): A = A(name)
}
case class A(name: String)
case class Group(elem: A*)

val G = Group
G("a1", "a2")
If you really want to support tuples, the following will do:
object Group {
  implicit def fromTuple2[A1 <% A, A2 <% A](t: (A1, A2)): Group = Group(t._1, t._2)
}
case class Group(elem: A*)

("a1", "a2"): Group