I'm currently trying to apply a more functional programming style to a project involving low-level (LWJGL-based) GUI development. Obviously, in such a case it is necessary to carry around a lot of state, which is mutable in the current version. My goal is to eventually have a completely immutable state, in order to avoid state changes as side effects. I studied scalaz's lenses and state monads for a while, but my main concern remains: all these techniques rely on copy-on-write. Since my state has both a large number of fields and some fields of considerable size, I'm worried about performance.
To my knowledge, the most common approach to modifying immutable objects is to use the generated copy method of a case class (this is also what lenses do under the hood). My first question is: how is this copy method actually implemented? I performed a few experiments with a class like:
case class State(
  innocentField: Int,
  largeMap: Map[Int, Int],
  largeArray: Array[Int]
)
By benchmarking and also by looking at the output of -Xprof, it looks like calling someState.copy(innocentField = 42) actually performs a deep copy, and I observe a significant performance drop when I increase the size of largeMap and largeArray. I was somehow expecting that the newly constructed instance would share the object references of the original state, since internally the references should just get passed to the constructor. Can I somehow force or disable this deep-copy behaviour of the default copy?
While pondering on the copy-on-write issue, I was wondering whether there are more general solutions to this problem in FP, which store changes of immutable data in a kind of incremental way (in the sense of "collecting updates" or "gathering changes"). To my surprise I could not find anything, so I tried the following:
// example state with just two fields
trait State {
  def getName: String
  def getX: Int

  def setName(updated: String): State = new CachedState(this) {
    override def getName: String = updated
  }
  def setX(updated: Int): State = new CachedState(this) {
    override def getX: Int = updated
  }

  // convenient modifiers
  def modName(f: String => String) = setName(f(getName))
  def modX(f: Int => Int) = setX(f(getX))

  def build(): State = new BasicState(getName, getX)
}
// actual (full) implementation of State
class BasicState(
  val getName: String,
  val getX: Int
) extends State
// CachedState delegates all getters to another state
class CachedState(oldState: State) extends State {
  def getName = oldState.getName
  def getX = oldState.getX
}
Now this allows me to do something like this:
var s: State = new BasicState("hello", 42)
// updating single fields does not copy
s = s.setName("world")
s = s.setX(0)
// after a certain number of "wrappings"
// we can extract (i.e. copy) a normal instance
val ns = s.setName("ok").setX(40).modX(_ + 2).build()
My question now is: what do you think of this design? Is this some kind of FP design pattern that I'm not aware of (apart from the similarity to the Builder pattern)? Since I have not found anything similar, I'm wondering whether there is some major issue with this approach. Or are there any more standard ways to solve the copy-on-write bottleneck without giving up immutability?
Is there even a possibility to unify the get/set/mod functions in some way?
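Something like the following lens-style sketch is roughly what I imagine (Field is a name I made up, not from any library): one value bundles get and set, and mod falls out for free:

case class Field[S, A](get: S => A, set: (S, A) => S) {
  def mod(f: A => A): S => S = s => set(s, f(get(s)))
}

// fields for the BasicState above
val nameField = Field[BasicState, String](_.getName, (s, n) => new BasicState(n, s.getX))
val xField    = Field[BasicState, Int](_.getX, (s, v) => new BasicState(s.getName, v))

// usage: val s2 = xField.mod(_ + 1)(nameField.set(s, "world"))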
Edit:
My assumption that copy performs a deep copy was indeed wrong.
This is basically the same as views and is a type of lazy evaluation; this type of strategy is more or less the default in Haskell, and is used in Scala a fair bit (see e.g. mapValues on maps, grouped on collections, pretty much anything on Iterator or Stream that returns another Iterator or Stream, etc.). It is a proven strategy to avoid extra work in the right context.
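For example, here is the mapValues view in action on Scala 2.12 and earlier (a small illustration; the println is only there to make the laziness visible):

val m = Map(1 -> 1, 2 -> 2)
val v = m.mapValues { x => println("computing " + x); x * 10 }
v(1) // prints "computing 1", returns 10
v(1) // prints "computing 1" again: nothing is cached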
But I think your premise is somewhat mistaken.
case class Foo(bar: Int, baz: Map[String,Boolean]) {}
Foo(1,Map("fish"->true)).copy(bar = 2)
does not in fact cause the map to be copied deeply. It just sets references. Proof in bytecode:
62: astore_1
63: iconst_2 // This is bar = 2
64: istore_2
65: aload_1
66: invokevirtual #72; //Method Foo.copy$default$2:()Lscala/collection/immutable/Map;
69: astore_3 // That was baz
70: aload_1
71: iload_2
72: aload_3
73: invokevirtual #76; //Method Foo.copy:(ILscala/collection/immutable/Map;)LFoo;
And let's see what that copy$default$2 thing does:
0: aload_0
1: invokevirtual #50; //Method baz:()Lscala/collection/immutable/Map;
4: areturn
Just returns the map.
And copy itself?
0: new #2; //class Foo
3: dup
4: iload_1
5: aload_2
6: invokespecial #44; //Method "<init>":(ILscala/collection/immutable/Map;)V
9: areturn
Just calls the regular constructor. No cloning of the map.
So when you copy, you create exactly one object: a new copy of what you're copying, with its fields filled in. If you have a large number of fields, your view will be faster: you still create one new object (two if you use the function application version, since you also need to create the function object), but it has only one field. Otherwise it should be about the same.
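A quick way to confirm the sharing, using the State class from the question:

val s1 = State(1, Map(1 -> 1), Array(1, 2, 3))
val s2 = s1.copy(innocentField = 42)
println(s2.largeMap eq s1.largeMap)     // true: the very same Map instance
println(s2.largeArray eq s1.largeArray) // true: the very same Array instance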
So, yes, good idea potentially, but benchmark carefully to be sure it's worth it in your case--you have to write a fair bit of code by hand instead of letting the case class do it all for you.
I tried to write a (quite rough) test timing the performance of your case class copy operation.
object CopyCase {
  def main(args: Array[String]) = {
    val testSizeLog = byTen(10 #:: Stream[Int]()).take(6).toList
    val testSizeLin = (100 until 1000 by 100) ++ (1000 until 10000 by 1000) ++ (10000 to 40000 by 10000)
    //warm up
    runTest(testSizeLin)
    //test with logarithmic size increments
    val times = runTest(testSizeLog)
    //test with linear size increments
    val timesLin = runTest(testSizeLin)
    times.foreach(println)
    timesLin.foreach(println)
  }

  //The case class to test for copy
  case class State(
    innocentField: Int,
    largeMap: Map[Int, Int],
    largeArray: Array[Int]
  )

  //executes the test
  def runTest(sizes: Seq[Int]) =
    for {
      s <- sizes
      st = State(s, largeMap(s), largeArray(s))
      //(time, state) = takeTime(st.copy(innocentField = 42)) //single run for each size
      (time, state) = mean(st.copy(innocentField = 42))(takeTime) //mean time over multiple runs for each size
    } yield (s, time)

  //Creates the stream of 10^n for n = 1, 2, 3, ...
  def byTen(s: Stream[Int]): Stream[Int] = s.head #:: byTen(s map (_ * 10))

  //pairs the execution time (in microseconds) with the result
  def takeTime[A](thunk: => A): (Double, A) = {
    import System.{nanoTime => nanos}
    val t0: Double = nanos
    val res = thunk
    val time = (nanos - t0) / 1000 // nanoseconds -> microseconds
    (time, res)
  }

  //takes the mean of the first element of the pair over multiple runs
  def mean[A](thunk: => A)(fun: (=> A) => (Double, A)) = {
    val population = 50
    val mean = ((for (n <- 1 to population) yield fun(thunk)) map (_._1)).sum / population
    (mean, fun(thunk)._2)
  }

  //Builds collections of the requested size
  def largeMap(size: Int) = (for (i <- (1 to size)) yield (i, i)).toMap
  def largeArray(size: Int) = Array.fill(size)(1)
}
On this machine:
CPU: 64bits dual-core-i5 3.10GHz
RAM: 8GB ram
OS: win7
Java: 1.7
Scala: 2.9.2
I have the following results, which look pretty regular to me. Note that takeTime divides nanoseconds by 1000, so these are microseconds, not milliseconds.
(size, microseconds to copy)
(10,0.4347000000000001)
(100,0.4412600000000001)
(1000,0.3953200000000001)
(10000,0.42161999999999994)
(100000,0.4478600000000002)
(1000000,0.42816000000000015)
(100,0.4084399999999999)
(200,0.41494000000000014)
(300,0.42156000000000016)
(400,0.4281799999999999)
(500,0.42160000000000003)
(600,0.4347200000000001)
(700,0.43466000000000016)
(800,0.41498000000000007)
(900,0.40178000000000014)
(1000,0.44134000000000007)
(2000,0.42151999999999995)
(3000,0.42148)
(4000,0.40842)
(5000,0.38860000000000006)
(6000,0.4413600000000001)
(7000,0.4743200000000002)
(8000,0.44795999999999997)
(9000,0.45448000000000005)
(10000,0.45448)
(20000,0.4281600000000001)
(30000,0.46768)
(40000,0.4676200000000001)
Maybe you have different performance measurements in mind.
Or could it be that your profiled times are actually spent on generating the Map and the Array, instead of copying the case class?
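If you want to rule that out, you could build the collections once, outside the timed region, so the measurement covers only the copy itself. A rough sketch, reusing largeMap/largeArray from above:

val st = State(0, largeMap(100000), largeArray(100000))
val t0 = System.nanoTime
val copied = st.copy(innocentField = 42)
println("copy took " + ((System.nanoTime - t0) / 1000.0) + " microseconds")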
My understanding was that all non-capturing lambdas shouldn't require object creation at the use site, because one can be created as a static field and reused. In principle, the same could be true for lambdas consisting of a call to a method of the enclosing class; only the field would be non-static. I never actually tried to dig any deeper into it; now I am looking at the bytecode, I don't see such a field in the enclosing class, and I don't have a good idea where else to look. I see though that the lambda factory is different from Java's, so this should have a clear answer, at least for a given Scala version.
My motivation is simple: profiling is very time consuming. Introducing method values (or in general, lambdas capturing only the state of the enclosing object) as private class fields is less clean and more work than writing them inline and, in general, not good code. But when writing areas known (with high likelihood) to be a hot spot, it's a very simple optimisation that can be performed straight away without any real impact on the programmer's time. It doesn't make sense though if no new object is created anyway.
Take for example:
def alias(x: X) = aliases.getOrElse(x, x)

def alias2(x: X) = aliases.getOrElse(x, null) match {
  case null => x
  case a => a
}
The first lambda (a Function0) must be a new object because it captures method parameter x, while the second one returns a constant (null) and thus doesn't really have to. It is also less messy (IMO) than a private class field, which pollutes the namespace, but I would like to be able to know for sure - or have a way of easily confirming my expectations.
The following proves that at least some of the time, the answer is "no":
scala 2.13.4> def foo = () => 1
def foo: () => Int
scala 2.13.4> foo eq foo
val res5: Boolean = true
Looking at the bytecode produced by this code:
import scala.collection.immutable.ListMap
object ByName {
  def aliases = ListMap("Ein" -> "One", "Zwei" -> "Two", "Drei" -> "Three")
  val default = "NaN"

  def alias(x: String) = aliases.getOrElse(x, x)

  def alias2(x: String) = aliases.getOrElse(x, null) match {
    case null => x
    case a => a
  }

  def alias3(x: String) = aliases.getOrElse(x, default)
}
The compiler generates static methods for the by-name parameters. They look like this:
public static final java.lang.String $anonfun$alias$1(java.lang.String);
Code:
0: aload_0
1: areturn
public static final scala.runtime.Null$ $anonfun$alias2$1();
Code:
0: aconst_null
1: areturn
public static final java.lang.String $anonfun$alias3$1();
Code:
0: getstatic #26 // Field MODULE$:LByName$;
3: invokevirtual #138 // Method default:()Ljava/lang/String;
6: areturn
The naive approach would have been for the compiler to generate anonymous classes that implement the Function0 interface. However, this would cause bytecode bloat. Instead, the compiler defers creating these anonymous classes until runtime, via invokedynamic instructions.
Exactly how Scala uses these invokedynamic instructions is beyond my knowledge. It's possible that they cache the generated Function0 object somehow, but my guess is that the invokedynamic call is sufficiently optimized that it's faster to just generate a new one every time. Allocating short lived objects is cheap, and the cost is most often overestimated. Reusing an existing object might even be slower than creating a new one if it means cache misses.
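One thing that is easy to check is the allocation behaviour itself, in the spirit of the foo eq foo REPL check earlier (observed on Scala 2.13; as said below, this is an implementation detail and may vary):

def foo = () => 1         // captures nothing
def bar(x: Int) = () => x // captures x

foo eq foo       // true: the non-capturing lambda instance is reused
bar(1) eq bar(1) // false: a capturing lambda is allocated per call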
I also want to point out that this is an implementation detail, and likely to change at any time. The Scala compiler devs and JVM devs know what they are doing, so you are probably better off trusting that their implementation balances performance well.
I am trying to implement the Fibonacci function in Scala with memoization.
One example given here uses a case statement:
Is there a generic way to memoize in Scala?
import scalaz.Memo
lazy val fib: Int => BigInt = Memo.mutableHashMapMemo {
  case 0 => 0
  case 1 => 1
  case n => fib(n - 2) + fib(n - 1)
}
It seems the variable n is implicitly bound to the first argument, but I get a compilation error if I replace n with _.
Also, what advantage does the lazy keyword have here? The function seems to work equally well with and without it.
However, I wanted to generalize this to a more generic function definition with appropriate typing:
import scalaz.Memo
def fibonachi(n: Int) : Int = Memo.mutableHashMapMemo[Int, Int] {
  var value : Int = 0
  if( n <= 1 ) { value = n; }
  else { value = fibonachi(n-1) + fibonachi(n-2) }
  return value
}
but I get the following compilation error
cmd10.sc:4: type mismatch;
 found   : Int => Int
 required: Int
def fibonachi(n: Int) : Int = Memo.mutableHashMapMemo[Int, Int] {
                              ^
Compilation Failed
So I am trying to understand the generic way of adding a memoization annotation to a Scala def function.
One way to achieve a Fibonacci sequence is via a recursive Stream.
val fib: Stream[BigInt] = 0 #:: fib.scan(1:BigInt)(_+_)
An interesting aspect of streams is that, if something holds on to the head of the stream, the calculation results are auto-memoized. So, in this case, because the identifier fib is a val and not a def, the value of fib(n) is calculated only once and simply retrieved thereafter.
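You can make the memoization visible with a (purely illustrative) traced variant that prints whenever a new element is actually computed:

val fibTraced: Stream[BigInt] = 0 #:: fibTraced.scan(1: BigInt) { (a, b) =>
  println("computing " + (a + b)); a + b
}
fibTraced(5) // prints a few "computing ..." lines while forcing the cells
fibTraced(5) // prints nothing: the already-forced cells are simply re-read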
However, indexing a Stream is still a linear operation. If you want to memoize that away you could create a simple memo-wrapper.
def memo[A, R](f: A => R): A => R =
  new collection.mutable.WeakHashMap[A, R] {
    override def apply(a: A) = getOrElseUpdate(a, f(a))
  }
val fib: Stream[BigInt] = 0 #:: fib.scan(1:BigInt)(_+_)
val mfib = memo(fib)
mfib(99) //res0: BigInt = 218922995834555169026
The more general question I am trying to ask is how to take a pre-existing def function and add a mutable/immutable memoization annotation/wrapper to it inline.
Unfortunately there is no way to do this in Scala, unless you are willing to use a macro annotation (which feels like overkill to me) or some very ugly design.
The contradicting requirements are "def" and "inline". The fundamental reason for this is that whatever you do inline with the def can't create any new place to store the memoized values (unless you use a macro that can rewrite code, introducing new vals/vars). You may try to work around this using some global cache, but that IMHO falls under the "ugly design" branch.
scalaz's Memo is designed to create a val of the type Function1[K, V], which in Scala is usually written as just K => V, instead of a def. This way the produced val can also contain the storage for the cached values. On the other hand, syntactically the difference between using a def method and a K => V function is minimal, so this works pretty well. Since the Scala compiler knows how to convert a def method into a function, you can wrap a def with Memo, but you can't get a def out of it. If for some reason you need a def anyway, you'll need another wrapper def.
import scalaz.Memo

object Fib {
  def fib(n: Int): BigInt = n match {
    case 0 => BigInt(0)
    case 1 => BigInt(1)
    case _ => fib(n - 2) + fib(n - 1)
  }

  // "fib _" converts a method into a function. Sometimes "_" might be omitted
  // and the compiler can infer it, but sometimes it needs this explicit hint.
  lazy val fib_mem_val: Int => BigInt = Memo.mutableHashMapMemo(fib _)

  def fib_mem_def(n: Int): BigInt = fib_mem_val(n)
}
println(Fib.fib(5))
println(Fib.fib_mem_val(5))
println(Fib.fib_mem_def(5))
Note how there is no difference in the syntax for calling fib, fib_mem_val and fib_mem_def, although fib_mem_val is a value. You may also try this example online.
Note: beware that some ScalaZ Memo implementations are not thread-safe.
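If you do need thread safety, one option (not part of scalaz; a sketch of my own on top of the standard library) is to back the memo with scala.collection.concurrent.TrieMap, whose getOrElseUpdate is atomic in recent Scala versions; under contention the function may still be evaluated more than once for the same key, but only one result is stored:

import scala.collection.concurrent.TrieMap

def concurrentMemo[K, V](f: K => V): K => V = {
  val cache = TrieMap.empty[K, V]
  k => cache.getOrElseUpdate(k, f(k))
}

lazy val fibSafe: Int => BigInt = concurrentMemo[Int, BigInt] {
  case 0 => BigInt(0)
  case 1 => BigInt(1)
  case n => fibSafe(n - 1) + fibSafe(n - 2)
}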
As for the lazy part, the benefit is the typical one for any lazy val: the actual value, with the underlying storage, will not be created until the first use. If the function will be used anyway, I see no benefit in declaring it lazy.
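A tiny illustration of what lazy buys here, reusing Fib from above (the println only marks when initialization happens):

object LazyDemo {
  lazy val memoized: Int => BigInt = {
    println("allocating the memo storage")
    Memo.mutableHashMapMemo(Fib.fib _)
  }
}
// LazyDemo.memoized(5) // the first call prints the message, then computes
// LazyDemo.memoized(5) // subsequent calls print nothing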
I am learning Scala, and as a good student I try to obey all the rules I find.
One rule is: IMMUTABILITY!!!
So I have tried to code everything with immutable data structures and vals, and sometimes this is really hard.
But today I thought to myself: the only important thing is that the object/class should have no mutable state. I am not forced to code all methods in an immutable style, because these methods don't affect each other.
My question: am I correct, or are there any problems/disadvantages I don't see?
EDIT:
Code example for aishwarya:
def logLikelihood(seq: Iterator[T]): Double = {
  val sequence = seq.toList
  val stateSequence = (0 to order).toList.padTo(sequence.length, order)
  val seqPos = sequence.zipWithIndex

  def probOfSymbAtPos(symb: T, pos: Int): Double = {
    val state = states(stateSequence(pos))
    M.log(state(seqPos.map(_._1).slice(0, pos).takeRight(order), symb))
  }

  val probs = seqPos.map(i => probOfSymbAtPos(i._1, i._2))
  probs.sum
}
Explanation: It is a method to calculate the log-likelihood of a homogeneous Markov model of variable order. The apply method of state takes all previous symbols and the coming symbol and returns the probability of doing so.
As you may see, the whole method just multiplies some probabilities, which would be much easier using vars.
The rule is not really immutability, but referential transparency. It's perfectly OK to use locally declared mutable variables and arrays, because none of the effects are observable to any other parts of the overall program.
The principle of referential transparency (RT) is this:
An expression e is referentially transparent if for all programs p every occurrence of e in p can be replaced with the result of evaluating e, without affecting the observable result of p.
Note that if e creates and mutates some local state, it doesn't violate RT since nobody can observe this happening.
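To illustrate with a (deliberately contrived) sketch: this function mutates a local var and a local array, yet it is referentially transparent, because no caller can ever observe the mutation:

def sumOfSquares(xs: List[Int]): Int = {
  val squares = new Array[Int](xs.length) // local, never escapes
  var i = 0
  for (x <- xs) { squares(i) = x * x; i += 1 }
  squares.sum
}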
That said, I very much doubt that your implementation is any more straightforward with vars.
The case for functional programming is one of being concise in your code and bringing in a more mathematical approach. It can reduce the possibility of bugs and make your code smaller and more readable. As for being easier or not, it does require that you think about your problems differently. But once you get used to thinking with functional patterns, it's likely that functional will become easier than the more imperative style.
It is really hard to be perfectly functional and have zero mutable state, but very beneficial to have minimal mutable state. The thing to remember is that everything needs to be done in balance and not to the extreme. By reducing the amount of mutable state you end up making it harder to write code with unintended consequences. A common pattern is to have a mutable variable whose value is immutable. This way identity (the named variable) and value (an immutable object the variable can be assigned) are separate.
var acc: List[Int] = Nil

// lots of complex stuff that adds values
acc ::= 1
acc ::= 2
acc ::= 3

// loop over the current list
acc foreach { i => /* do stuff that mutates acc */ acc ::= i * 10 }

println(acc) // List(10, 20, 30, 3, 2, 1)
The foreach is looping over the value of acc at the time we started the foreach. Any mutations to acc do not affect the loop. This is much safer than the typical iterators in java where the list can change mid iteration.
There is also a concurrency concern. Immutable objects are useful because of the JSR-133 memory model specification, which asserts that the initialization of an object's final members will occur before any thread can have visibility of those members, period! If they are not final then they are "mutable" and there is no guarantee of proper initialization.
Actors are the perfect place to put mutable state. Objects that represent data should be immutable. Take the following example.
import scala.actors.Actor
import scala.actors.Actor.loop

object MyActor extends Actor {
  var acc: List[Int] = Nil

  def act() {
    loop {
      react {
        case i: Int => acc ::= i
        case "what is your current value" => reply(acc)
        case _ => // ignore all other messages
      }
    }
  }
}
In this case we can send the value of acc (which is a List) and not worry about synchronization, because List is immutable, i.e. all of the members of the List object are final. Also, because of the immutability, we know that no other actor can change the underlying data structure that was sent, and thus no other actor can change the mutable state of this actor.
Since Apocalisp has already mentioned the stuff I was going to quote him on, I'll discuss the code. You say it is just multiplying stuff, but I don't see that: it makes reference to at least three important things defined outside: order, states and M.log. I can infer that order is an Int, and that states returns a function that takes a List[T] and a T and returns a Double.
There's also some weird stuff going on...
def logLikelihood(seq: Iterator[T]): Double = {
val sequence = seq.toList
sequence is never used except to define seqPos, so why do that?
val stateSequence = (0 to order).toList.padTo(sequence.length,order)
val seqPos = sequence.zipWithIndex
def probOfSymbAtPos(symb: T, pos: Int) : Double = {
val state = states(stateSequence(pos))
M.log(state( seqPos.map( _._1 ).slice(0, pos).takeRight(order), symb))
Actually, you could use sequence here instead of seqPos.map( _._1 ), since all that does is undo the zipWithIndex. Also, slice(0, pos) is just take(pos).
}
val probs = seqPos.map( i => probOfSymbAtPos(i._1,i._2) )
probs.sum
}
Now, given the missing methods, it is difficult to assert how this should really be written in functional style. Keeping the mystery methods would yield:
def logLikelihood(seq: Iterator[T]): Double = {
  import scala.collection.immutable.Queue
  case class State(index: Int, order: Int, slice: Queue[T], result: Double)
  seq.foldLeft(State(0, 0, Queue.empty, 0.0)) {
    case (State(index, ord, slice, result), symb) =>
      val state = states(ord)
      val partial = M.log(state(slice, symb))
      val newSlice = slice enqueue symb
      State(index + 1,
            if (ord == order) ord else ord + 1,
            if (newSlice.size > order) newSlice.dequeue._2 else newSlice,
            result + partial)
  }.result
}
Only I suspect the state/M.log stuff could be made part of State as well. I notice other optimizations now that I have written it like this. The sliding window you are using reminds me, of course, of sliding:
seq.sliding(order).zipWithIndex.map {
  case (slice, index) => M.log(states(index + order)(slice.init, slice.last))
}.sum
That will only start at the order-th element, so some adaptation would be in order. Not too difficult, though. So let's rewrite it again:
def logLikelihood(seq: Iterator[T]): Double = {
  val sequence = seq.toList
  val slices = (1 until order).map(sequence.take).toList ::: sequence.sliding(order).toList
  slices.zipWithIndex.map {
    case (slice, index) => M.log(states(index)(slice.init, slice.last))
  }.sum
}
I wish I could see M.log and states... I bet I could turn that map into a foldLeft and do away with these two methods. And I suspect the method returned by states could take the whole slice instead of two parameters.
Still... not bad, is it?
Is there a way in Scala to get the arguments back from an already partially applied function?
Does this even make sense, should be done, or fits into any use case?
example:
def doStuff(lower: Int, upper: Int, b: String) =
  for (turn <- lower to upper) println(turn + ": " + b)
Imagine that at one point I know the lower argument, and I get a function by applying it to doStuff:
val lowerDoStuff = doStuff(3, _: Int, _: String)
Is there a way for me to get that 3 back ?
(for the sake of example, imagine that I am inside a function which only received 'lowerDoStuff' and now needs to know the first argument)
Idiomatic Scala is preferred to introspection/reflection (if possible).
Idiomatic Scala: no, you can't. You have specifically said that the first argument is no longer relevant. If the compiler can make it disappear entirely, that's best: you say you have a function that depends on an int and a string, and you haven't made any promises about what generated it. If you really need that value, but you also really need to pass a 2-argument function, you can do it by hand:
class Function2From3[A, B, C, Z](f: (A, B, C) => Z, val _1: A) extends Function2[B, C, Z] {
  def apply(b: B, c: C) = f(_1, b, c)
}
val lowerDoStuff = new Function2From3(doStuff _, 3)
Now when you get the function later on, you can pattern match to see if it's a Function2From3, and then read the value:
val f: Function2[Int,String,Unit] = lowerDoStuff
f match {
  case g: Function2From3[_, _, _, _] => println("I know there's a " + g._1 + " in there!")
  case _ => println("It's all Greek to me.")
}
(if it's important to you that it be an integer, you can remove A as a generic parameter and make _1 be an integer--and maybe just call it lower while you're at it).
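For instance, the Int-specialized variant could look like this (a sketch with names of my choosing):

class Function2From3Lower[B, C, Z](f: (Int, B, C) => Z, val lower: Int) extends Function2[B, C, Z] {
  def apply(b: B, c: C) = f(lower, b, c)
}

val f2: (Int, String) => Unit = new Function2From3Lower(doStuff _, 3)
f2 match {
  case g: Function2From3Lower[_, _, _] => println("lower is " + g.lower)
  case _ => println("no lower bound here")
}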
Reflection: no, you can't (not in general). The compiler's smarter than that. The generated bytecode (if we wrap your code in class FuncApp) is:
public final void apply(int, java.lang.String);
Signature: (ILjava/lang/String;)V
Code:
0: aload_0
1: getfield #18; //Field $outer:LFuncApp;
4: iconst_3
5: iload_1
6: aload_2
7: invokevirtual #24; //Method FuncApp.doStuff:(IILjava/lang/String;)V
10: return
Notice the iconst_3? That's where your 3 went--it disappeared into the bytecode. There's not even a hidden private field containing the value any more.
As I read, a Scala immutable val doesn't get translated to a Java final for various reasons. Does this mean that accessing a val from another thread must be guarded with synchronization in order to guarantee visibility?
The assignment to the val itself is fine from a multi-threading point of view, because you have to assign a val a value when you declare it, and that value can't be changed in the future (so if you do val s = "hello", s is "hello" from its birth on: no thread will ever read another value).
There are a couple of caveats, however:
1 - if you assign an instance of a mutable class to a val, the val by itself will not "protect" the internal state of the class from changing.
class Foo(s: String) { var thisIsMutable = s }

// you can then do this
val x = new Foo("hello")
x.thisIsMutable = "goodbye"
// note that val guarantees that x is still the same instance of Foo
// reassigning x = new Foo("goodbye") would be illegal
2 - you (or one of your libraries...) can change a val via reflection. If this happens, two threads could indeed read a different value for your val.
import java.lang.reflect.Field

class Foo { val foo = true } // foo is immutable

object test {
  def main(args: Array[String]) {
    val f = new Foo
    println("foo is " + f.foo) // "foo is true"
    val fld = f.getClass.getDeclaredField("foo")
    fld.setAccessible(true)
    fld.setBoolean(f, false)
    println("foo is " + f.foo) // "foo is false"
  }
}
As object members, once initialized, vals never change their values during the lifetime of the object. As such, their values are guaranteed to be visible to all threads provided that the reference to the object didn't escape in the constructor. And, in fact, they get Java final modifiers as illustrated below:
object Obj {
  val r = 1

  def foo {
    val a = 1
    def bar = a
    bar
  }
}
Using javap:
...
private final int r;
...
public void foo();
...
0: iconst_1
1: istore_1
2: aload_0
3: iload_1
4: invokespecial #31; //Method bar$1:(I)I
7: pop
...
private final int bar$1(int);
...
0: iload_1
1: ireturn
...
As method locals, they are used only within the method, or they're passed to a nested method or a closure as arguments (see the lifted bar$1 above). A closure might be passed on to another thread, but it will only have a final field with the value of the local val. Therefore, they are visible from the point where they are created to all other threads, and synchronization is not necessary.
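For illustration, a sketch of the closure case (the anonymous Runnable stores greeting in a final field, so the spawned thread sees it without any explicit synchronization):

def spawn(): Unit = {
  val greeting = "hello" // local val, captured below
  new Thread(new Runnable {
    def run(): Unit = println(greeting) // reads the closure's final field
  }).start()
}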
Note that this says nothing about the object the val points to - it itself may be mutable and warrant synchronization.
In most cases the above cannot be violated via reflection - a Scala val member declaration actually generates a getter with the same name and a private field which the getter accesses. Trying to use reflection to modify the field (looking it up by its public name) will result in a NoSuchFieldException. The only way you could modify it is to add the @specialized annotation to your class, which will make the specialized fields protected, hence accessible to reflection. I cannot currently think of any other situation that could change something declared as val...