Python nested dataclasses ...is this valid? - python-3.7

Background
I'm using dataclasses to create a nested data structure that I use to represent complex test output.
Previously I'd been creating a hierarchy by creating multiple top-level dataclasses and then using composition:
from dataclasses import dataclass

@dataclass
class Meta:
    color: str
    size: float

@dataclass
class Point:
    x: float
    y: float
    stuff: Meta

point1 = Point(x=5, y=-5, stuff=Meta(color='blue', size=20))
Problem
I was wondering if there was a way of defining the classes in a self-contained way, rather than polluting my top-level namespace with a bunch of lower-level classes.
So above, the definition of the Point dataclass would contain the definition of Meta, rather than Meta being defined at the top level.
Solution?
I wondered if it's possible to use inner (dataclass) classes with a dataclass and have things all work.
So I tried this:
from dataclasses import dataclass
from typing import get_type_hints

@dataclass
class Point:
    @dataclass
    class Meta:
        color: str
        size: float

    @dataclass
    class Misc:
        elemA: bool
        elemB: int

    x: float
    y: float
    meta: Meta
    misc: Misc

point1 = Point(x=1, y=2,
               meta=Point.Meta(color='red', size=5.5),
               misc=Point.Misc(elemA=True, elemB=-100))

print("This is the point:", point1)
print(point1.x)
print(point1.y)
print(point1.meta)
print(point1.misc)
print(point1.meta.color)
print(point1.misc.elemB)

point1.misc.elemB = 99
print(point1)
print(point1.misc.elemB)
This all seems to work - the print outputs all work correctly, and the assignment to a (sub) member element works as well.
You can even support defaults for nested elements:
from dataclasses import dataclass

@dataclass
class Point:
    @dataclass
    class Meta:
        color: str = 'red'
        size: float = 10.0

    x: float
    y: float
    meta: Meta = Meta()

pt2 = Point(x=10, y=20)
print('pt2', pt2)
...prints out red and 10.0 defaults for pt2 correctly
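One caveat worth flagging with the meta: Meta = Meta() default above (my observation, not something the original post covers): that single Meta() instance is shared by every Point built from the default, so mutating one point's meta mutates them all, and Python 3.11+ rejects such unhashable instance defaults with a ValueError. A default_factory sidesteps both issues:

```python
from dataclasses import dataclass, field

@dataclass
class Point:
    @dataclass
    class Meta:
        color: str = 'red'
        size: float = 10.0

    x: float
    y: float
    # default_factory builds a fresh Meta for every Point,
    # instead of sharing one Meta() instance across all of them
    meta: Meta = field(default_factory=Meta)

p1 = Point(x=0, y=0)
p2 = Point(x=1, y=1)
p1.meta.color = 'blue'
print(p2.meta.color)  # still 'red': p2 has its own Meta
```

This keeps the nested-defaults convenience while giving each instance independent state.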
Question
Is this a correct way to implement nested dataclasses?
(meaning it's just not lucky it works now, but would likely break in future? ...or it's just fugly and Not How You Do Things? ...or it's just Bad?)
...It's certainly a lot cleaner and a million times easier to understand and support than a gazillion top-level 'mini' dataclasses being composed together.
...It's also a lot easier than trying to use marshmallow or jury-rigging a JSON-schema-to-class-structure model.
...It also is very simple (which I like)

You can just use strings to annotate classes that don't exist yet:
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float
    stuff: "Meta"

@dataclass
class Meta:
    color: str
    size: float

point1 = Point(x=5, y=-5, stuff=Meta(color='blue', size=20))
That way, you can reorder class definitions in whatever way makes the most sense. Static type checkers like mypy also respect forward references written this way; they are part of the original PEP on type annotations, so nothing exotic. Nesting the classes also solves the problem, but is in my opinion harder to read, since flat is better than nested.
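As a small sanity check of the forward reference (my sketch, reusing the get_type_hints import that the question already pulls in), the string annotation resolves to the real class once Meta exists:

```python
from dataclasses import dataclass
from typing import get_type_hints

@dataclass
class Point:
    x: float
    y: float
    stuff: "Meta"  # forward reference: Meta is not defined yet

@dataclass
class Meta:
    color: str
    size: float

# After Meta is defined, the string annotation resolves to the real class
print(get_type_hints(Point)['stuff'] is Meta)  # True
```

Nothing needs to resolve the string at class-definition time, which is exactly why the reordering works.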

Related

correct setup for opaque type with underlying Numeric/Ordering instances

It's unclear to me whether this is in fact the same as some existing questions; apologies if this is a duplicate.
I would like to define a type Ordinate which is simply an Int under the hood:
package world

opaque type Ordinate = Int

given Ordering[Ordinate] with {
  def compare(x: Ordinate, y: Ordinate): Int = x.compare(y)
}
I would like to be able to leverage the Numeric[Int] and Ordering[Int] methods so that it would be easy to define methods such as:
package world

import Ordinate.given

class Boundary(dims: List[(Ordinate, Ordinate)]) {
  def contains(o: Ordinate, dimension: Int): Boolean = {
    val (min, max) = dims(dimension)
    min <= o && o <= max
  }
}
...setting aside for the moment that this would blow up if dims were empty, dimension < 0, or dims.length <= dimension.
When I try to set this up, I get compiler errors at the call site:
value <= is not a member of world.Ordinate, but could be made available as an extension method.
One of the following imports might fix the problem:
import world.given_Ordering_Ordinate.mkOrderingOps
import math.Ordering.Implicits.infixOrderingOps
import math.Ordered.orderingToOrdered
More generally, it would be wicked cool if this worked without any special given imports, at least for files in the same package as Ordinate and, even better, across the codebase. But that may be an anti-pattern I've carried forward from my Scala 2 coding.
Explicit given imports may be a better pattern, but I'm still learning Scala 3 after Scala 2. I know that if I created an implicit val o = Ordering.by(...) in the companion object of Ordinate in Scala 2, with Ordinate as a value class, I would get the effect I'm looking for (zero-cost type abstraction plus numeric behavior).
Anyhow, I'm guessing I'm just missing a small detail here. Thank you for reading and for any help.
Scala 3 has revised the rules for infix operators, so that the author must explicitly expose infix operations such as x <= y for some custom type T.
I've found two ways to address this for an opaque type, both with drawbacks:
At the call site, have import math.Ordering.Implicits.infixOrderingOps in scope, which brings in a given instance that converts an Ordering[T] into infix comparators. Drawback: any file that wants these comparators needs the import line, adding more import boilerplate as the number of files using this opaque type increases.
package world

import Ordinate.given
import math.Ordering.Implicits.infixOrderingOps // <-- add this line

class Boundary(dims: List[(Ordinate, Ordinate)]) {
  def contains(o: Ordinate, dimension: Int): Boolean = {
    val (min, max) = dims(dimension)
    min <= o && o <= max
  }
}
Add an infix extension method for each comparator you want to expose. The drawback here is the boilerplate of having to write out the very thing we're trying not to duplicate in each file.
opaque type Ordinate = Int

object Ordinate {
  extension (o: Ordinate) {
    infix def <=(x: Ordinate): Boolean = o <= x // <-- add 'infix' here
  }
}
I'm guessing that, for those more experienced with large programs, these drawbacks are preferable to the drawbacks of anything more permissive than this least-privilege approach to givens. But this still doesn't seem to deliver on the promise of opaque types as a zero-cost abstraction for numeric types. What seems to be missing is something like "import a given and treat its methods as infix for my type".

Scala var best practice - Encapsulation

I'm trying to understand the best practice for using vars in Scala. For example:
class Rectangle() {
  var x: Int = 0
}
Or something like:
class Rectangle() {
  private var _x: Int = 0
  def x: Int = _x
  def x_(newX: Int): Unit = _x = newX
}
Which one can be considered better, and why?
Thank you!
As Luis already explained in the comment, vars should be avoided whenever possible, and a simple case like the one you gave can be better designed with something like this:
// Companion object is not necessary in your case
object Rectangle {
  def fromInt(x: Int): Option[Rectangle] =
    if (x > 0) Some(Rectangle(x)) else None
}

final case class Rectangle(x: Int)
Situations where you can't avoid using vars in Scala are very rare. The general Scala idiom is: "Make your variables immutable, unless there is a good reason not to."
I'm trying to understand what's the best practice for using vars in scala, […]
Best practice is to not use vars at all.
Which one can be considered as better? and why?
The second one is basically equivalent to what the compiler would generate for the first one anyway, so it doesn't really make sense to use the second one.
It would make sense if you wanted to give different accessibility to the setter and the getter, something like this:
class Rectangle {
  private[this] var _x = 0
  def x = _x
  private def x_=(x: Int) = _x = x
}
As you can see, I am using different accessibility for the setter and the getter, so it makes sense to write them out explicitly. Otherwise, just let the compiler generate them.
Note: I made a few other changes to the code:
I changed the visibility of the _x backing field to private[this].
I changed the name of the setter to x_=. This is the standard naming for setters, and it has the added advantage that it allows you to use someRectangle.x = 42 syntactic sugar to call it, making it indistinguishable from a field.
I added some whitespace to give the code room to breathe.
I removed some return type annotations. (This one is controversial.) The community standard is to always annotate your return types in public interfaces, but in my opinion, you can leave them out if they are trivial. It doesn't really take much mental effort to figure out that 0 has type Int.
Note that your first version can also be simplified:
class Rectangle(var x: Int = 0)
However, as mentioned in other answers, you really should make your objects immutable. It is easy to create a simple immutable data object with all the convenience functions generated automatically for you by using a case class:
final case class Rectangle(x: Int = 0)
If you now want to "change" your rectangle, you instead create a new one which has all the properties the same except x (in this case, x is the only property, but there could be more). To do this, Scala generates a nifty copy method for you:
val smallRectangle = Rectangle(3)
val enlargedRectangle = smallRectangle.copy(x = 10)

Using a Lens on a non-case class extending something with a constructor in Scala

I am probably thinking about this the wrong way, but I am having trouble in Scala to use lenses on classes extending something with a constructor.
class A(c: Config) extends B(c) {
  val x: String = doSomeProcessing(c, y) // y comes from B
}
I am trying to create a Lens to mutate this class, but am having trouble doing so. Here is what I would like to be able to do:
val l = Lens(
  get = (_: A).x,
  set = (c: A, xx: String) => c.copy(x = xx) // doesn't work because A is not a case class
)
I think it all boils down to finding a good way to mutate this class.
What are my options to achieve something like that? I was thinking about this in 2 ways:
Move the initialization logic into a def apply(c: Config) in a companion object A, and change the A class to a case class that gets created from the companion object. Unfortunately, I can't extend B(c) in my object, because I only have access to c in its apply method.
Make x a var. Then in the Lens.set just A.clone then set the value of x then return the cloned instance. This would probably work but seems pretty ugly, not to mention changing this to a var might raise a few eyebrows.
Use some reflection magic to do the copy. Not really a fan of this approach if I can avoid it.
What do you think? Am I thinking about this really the wrong way, or is there an easy solution to this problem?
This depends on what you expect your Lens to do. The lens laws specify that the setter should replace the value that the getter would get, while keeping everything else unchanged. It is unclear what "everything else" means here.
Do you wish the constructor for B to be called when setting? Do you wish the doSomeProcessing method to be called?
If all your methods are purely functional, then you may consider that the class A has two "fields", c: Config and x: String, so you might as well replace it with a case class with those fields. However, this will cause a problem when implementing the constructor that takes only c as a parameter.
What I would consider is doing the following:
class A(val c: Config) extends B(c) {
  val x = doSomeProcessing(c, y)
  def copy(newX: String) = new A(c) { override val x = newX }
}
The Lens you wrote is now perfectly valid (except for the named parameter in the copy method).
Be careful if you have other properties in A which depend on x, this might create an instance with unexpected values for these.
If you do not wish c to be a property of class A, then you won't be able to clone it, or to rebuild an instance without giving a Config to your builder, which a lens's setter cannot take, so it seems your goal would be unachievable.

Base class reference in Scala

I have to store a set of shape classes (say squares and circles) in a single array/set in scala.
In C++, we can store pointers to objects of derived class in a base class pointer.
std::vector<shape*> list;
shape* temp = new square;
list.push_back(temp);
Is such a thing possible in Scala? If so how does that code look?
Scala is an OO language, so why would that be a problem?
trait Shape
case class Square(x: Int, y: Int, w: Int, h: Int) extends Shape

import scala.collection.mutable.ArrayBuffer

val list = new ArrayBuffer[Shape]
list += Square(0, 0, 10, 10)
In Scala, as in Java, arrays and lists do not store the actual (non-primitive) elements; they only store references to them. That means you can do the same thing with arrays.
Java arrays do store actual elements of primitive data types. Scala 2.8+ has the @specialized feature, which prevents boxing of "primitives".

Is it (really that) bad to use case-classes for mutable state?

Consider the following code:
case class Vector3(var x: Float, var y: Float, var z: Float) {
  def add(v: Vector3): Unit = {
    this.x += v.x
    this.y += v.y
    this.z += v.z
  }
}
As you can see, the case class holds mutable state. Doing this is highly discouraged, and normally I'd agree and absolutely stick to that "rule", but here's the whole story.
I'm using Scala to write a little 3d-game-engine from scratch. So first I thought about using a (much more) functional style, but then the garbage-collector would kick in too often.
Think about it for a moment: I have dozens and dozens of entities in a test game. All of them have a position (Vector3), an orientation (Vector3), a scale (Vector3), and a whole lot of matrices too. If I were to go functional in these classes (Vector3 and Matrix4) and make them immutable, I would be returning hundreds of new objects each frame, resulting in a huge FPS loss because, let's face it, GC has its uses, but in a game engine with OpenGL... not so much.
Vector3 was a class before, but it is a case class now, because somewhere in the code I need pattern-matching for it.
So, is it really that bad to use a case-class that holds mutable state?
Please do not turn this into a discussion about "Why do you even use Scala for a project such as that?" I know that there may be better alternatives out there, but I'm not interested in writing (yet another) engine in C++, nor am I too eager to dive into Rust (yet).
I would say it is bad to use case classes with mutable state, but mainly because they override your equals and hashCode methods. Somewhere in your code you may check whether a == b and find that they are equal; later they may be different, because they are mutable. At the very least, they are dangerous to use in combination with hash-based collections.
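A minimal sketch of that failure mode, using the question's Vector3 (this is illustrative, not part of the original answer):

```scala
import scala.collection.mutable

case class Vector3(var x: Float, var y: Float, var z: Float)

val a = Vector3(1f, 2f, 3f)
val b = Vector3(1f, 2f, 3f)
println(a == b)            // true: case-class equality is structural

val seen = mutable.HashSet(a)
a.x = 99f                  // mutation changes a's hashCode
println(a == b)            // false: a no longer equals b
println(seen.contains(a))  // usually false: the set looks in the wrong bucket
```

The seen.contains(a) line is the hash-based-collection danger: a was stored under its old hash code, so after the mutation the set can typically no longer find it.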
However, you don't seem to need all the functionality a case class provides. What you really seem to require is an extractor for pattern matching, so why not define just that? Furthermore, the static factory apply and a readable toString representation may be convenient, so you could implement those too.
How about:
class Vector(var x: Float, var y: Float, var z: Float) {
  override def toString = s"Vector($x, $y, $z)"
}

object Vector {
  def apply(x: Float, y: Float, z: Float) = new Vector(x, y, z)
  def unapply(v: Vector): Option[(Float, Float, Float)] = Some((v.x, v.y, v.z))
}