Function Overloading Mechanism - scala

class X
class Y extends X
class Z extends Y
class M {
  def f(x: X): String = "f with X at M"
  def f(x: Y): String = "f with Y at M"
}

class N extends M {
  override def f(x: Y): String = "f with Y at N"
  def f(x: Z): String = "f with Z at N"
}
val z: Z = new Z
val y: Y = z
val x: X = y
val m: M = new N
println(m.f(x))
// m dynamically matches as type N and sees x as type X thus goes into class M where it calls "f with X at M"
println(m.f(y))
// m dynamically matches as type N and sees y as type Y where it calls "f with Y at N"
println(m.f(z))
// m dynamically matches as type N and sees z as type Z where it calls "f with Z at N"
Consider this code. I don't understand why the final call, println(m.f(z)), doesn't behave as I wrote in the comments - is there a good resource for understanding how overloading works in Scala?
Thanks!

Firstly, overloading in Scala works the same way as in Java.
Secondly, it's a matter of static and dynamic binding. Let's look at what the compiler sees. You have an object m: M. Class M has the methods f(X) and f(Y). When you call m.f(z), the compiler resolves the call to f(Y), because Z is a subclass of Y. This is the important point: the compiler doesn't know the runtime class of m, so it knows nothing about the method N.f(Z). That is static binding: the compiler resolves the method's signature. Later, at runtime, dynamic binding happens: the JVM knows the real class of m, so it calls f(Y), which is overridden in N - hence "f with Y at N".
Hope my explanation is clear enough.
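To make the two stages concrete, here is a small sketch of the calls above; the comments show which signature the compiler picks and what actually prints:
// Static binding: overloads on m are resolved against the declared type, M, which only has f(X) and f(Y).
m.f(x) // resolves to f(X); only M defines it -> "f with X at M"
m.f(y) // resolves to f(Y); at runtime the override N.f(Y) runs -> "f with Y at N"
m.f(z) // also resolves to f(Y), because Z <: Y and N.f(Z) is invisible through an M reference;
       // at runtime the override N.f(Y) runs -> "f with Y at N"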

The overload is resolved on the static type M, so m.f(z) binds to f(Y); N overrides f(Y), which is why the call ends up in N and prints "f with Y at N" rather than "f with Z at N".

When you do this
val m: M = new N
It means that m is capable of doing everything that class M can do. M has two methods: one that takes an X, the other a Y.
And hence when you do this
m.f(z)
The compiler is going to search for a method that can accept z (of type Z). The method f(Z) in N is not a candidate here, for two reasons:
The reference is of type M.
Your N does not override any method of M that can accept an argument of type Z. You do have a method in N that accepts a Z, but it is not a candidate because it doesn't override anything from M.
The best match is the f in M that accepts a Y, because Z is-a Y.
You can get what your last comment says if:
You define a method in M that takes an argument of type Z and then override it in N, or
You declare the val with type N, e.g. val m: N = new N (see the sketch below).
I think the existing questions on SO already elaborate this point.
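A small sketch of both options follows; M2 and N2 are hypothetical classes introduced here only for illustration, they are not part of the original code.
// Option 1: declare an f(Z) in the base class and override it in the subclass.
class M2 extends M {
  def f(x: Z): String = "f with Z at M2"
}
class N2 extends M2 {
  override def f(x: Z): String = "f with Z at N2"
}
println((new N2: M2).f(z)) // resolved statically to f(Z) on M2, dispatched dynamically to N2 -> "f with Z at N2"

// Option 2: give the reference the static type N, so N.f(Z) is visible to the compiler.
val n: N = new N
println(n.f(z)) // -> "f with Z at N"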

Related

I have a problem implementing the residuals function for scipy's leastsq optimization when importing it from another file

I have written code in which functions call each other. The working code is as follows:
import numpy as np
from scipy.optimize import leastsq
import RF

func = RF.roots
# residuals = RF.residuals

def residuals(params, x, y):
    return y - func(params, x)

def estimation(x, y):
    p_guess = [1, 2, 0.5, 0]
    params, cov, infodict, mesg, ier = leastsq(residuals, p_guess, args=(x, y), full_output=True)
    return params

x = np.array([2.78e-03, 3.09e-03, 3.25e-03, 3.38e-03, 3.74e-03, 4.42e-03, 4.45e-03, 4.75e-03, 8.05e-03, 1.03e-02, 1.30e-02])
y = np.array([2.16e+02, 2.50e+02, 3.60e+02, 4.48e+02, 5.60e+02, 8.64e+02, 9.00e+02, 1.00e+03, 2.00e+03, 3.00e+03, 4.00e+03])

FIT_params = estimation(x, y)
print(FIT_params)
where the RF file is:
def roots(params, x):
    a, b, c, d = params
    y = a * (b * x) ** c + d
    return y

def residuals(params, x, y):
    return y - func(params, x)
I would like to remove the residuals function from the main code and use it by importing it from the RF file instead, i.e. by activating the line residuals = RF.residuals. By doing so, the error NameError: name 'func' is not defined appears. I tried adding a func argument to RF's residuals function, as def residuals(func, params, x, y):, but that runs into TypeError: residuals() missing 1 required positional argument: 'y'; the error seems related to the fourth argument of the residuals function in this example, because it complains about 'func' instead when the func argument is placed after the y argument. I couldn't find the source of the issue, but I guess it must be related to how arguments are passed to functions. I would appreciate it if anyone could guide me to understand the error and its solution.
Is it possible to move the residuals function from the main code to the RF file? How?
The problem is that there's no global variable func in your file RF.py, hence it can't be found. A simple solution would be to add an additional parameter to your residuals function:
# RF.py
def roots(params, x):
    a, b, c, d = params
    y = a * (b * x) ** c + d
    return y

def residuals(params, func, x, y):
    return y - func(params, x)
Then you can use it inside your other file like this (note the argument order: leastsq passes the current parameter vector first and then unpacks args, so it effectively calls residuals(params, func, x, y) on each iteration):
import numpy as np
from scipy.optimize import leastsq
from RF import residuals, roots as func

def estimation(func, x, y):
    p_guess = [1, 2, 0.5, 0]
    params, cov, infodict, mesg, ier = leastsq(residuals, p_guess, args=(func, x, y), full_output=True)
    return params

x = np.array([2.78e-03, 3.09e-03, 3.25e-03, 3.38e-03, 3.74e-03, 4.42e-03, 4.45e-03, 4.75e-03, 8.05e-03, 1.03e-02, 1.30e-02])
y = np.array([2.16e+02, 2.50e+02, 3.60e+02, 4.48e+02, 5.60e+02, 8.64e+02, 9.00e+02, 1.00e+03, 2.00e+03, 3.00e+03, 4.00e+03])

FIT_params = estimation(func, x, y)
print(FIT_params)

What is the flow of execution in the following functions?

I don't understand: after the call y(k), what executes first in the function "y" - the parameter or the body of the function? How does the number 5 arrive at the function k?
def k(x:Int) = x*x
def y(h:Int => Int) = h(5)
y(k)
OUTPUT:
25
So the beauty of Functional Programming is that we can reason about our programs as expressions.
Given:
def k(x: Int) = x * x // 1.
def y(h: Int => Int) = h(5) // 2.
Then:
y(k) = k(5) // By definition of y (2).
y(k) = 5 * 5 // By definition of k (1).
y(k) = 25 // By definition of multiplication.
Here I made some simplifications, like not doing type checking, but that should be pretty straightforward.
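The same reasoning works for any function you pass in; a small sketch (the lambda is just an illustration, not from the original question):
y(k)          // k is passed as a value; inside y it is bound to h, so h(5) = k(5) = 25
y(n => n + 1) // 6: the body of y simply applies whatever function was passed to 5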

Add a vector to every column of a matrix, using Scala Breeze

I have a matrix M of size (L x N) and I want to add the same vector v of length L to every column of the matrix. Is there a way to do this using Scala Breeze?
I tried:
val H = DenseMatrix.zeros(L, N)
for (j <- 0 to L) {
  H(::, j) = M(::, j) + v
}
but this doesn't really fit Scala's immutability, as H is already defined and it therefore gives a reassignment-to-val error. Any suggestions appreciated!
To add a vector to all the columns of a matrix, you don't need to loop through the columns; you can use the column broadcasting feature. For your example:
H(::, *) + v // assuming v is a Breeze DenseVector
should work.
import breeze.linalg._
val L = 3
val N = 2
val v = DenseVector(1.0,2.0,3.0)
val H = DenseMatrix.zeros[Double](L, N)
val result = H(::,*) + v
//result: breeze.linalg.DenseMatrix[Double] = 1.0 1.0
// 2.0 2.0
// 3.0 3.0
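Applied to the M and v from the question (assuming M is an L x N DenseMatrix[Double] and v a DenseVector[Double] of length L), the whole loop collapses to a single expression that returns a new matrix, so nothing needs to be mutated:
val shifted: DenseMatrix[Double] = M(::, *) + v // v is added to every column of M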

Method inheritance on contravariant type

I have defined two typeclasses:
trait WeakOrder[-X] { self =>
  def cmp(x: X, y: X): Int
  def max[Y <: X](x: Y, y: Y): Y = if (cmp(x, y) >= 0) x else y
  def min[Y <: X](x: Y, y: Y): Y = if (cmp(x, y) <= 0) x else y
}

trait Lattice[X] { self =>
  def sup(x: X, y: X): X
  def inf(x: X, y: X): X
}
I would like to do the following:
trait TotalOrder[-X] extends Lattice[X] with WeakOrder[X] { self =>
  def sup(x: X, y: X): X = max(x, y)
  def inf(x: X, y: X): X = min(x, y)
}
But this is impossible, because the contravariant type X appears in a covariant position (the return type of sup and inf).
However, semantically this is correct: the signature max[Y <: X](x: Y, y: Y): Y encodes the fact that the return value of max / min must be one of the two arguments.
I tried to do the following:
trait TotalOrder[-X] extends Lattice[X] with WeakOrder[X] { self =>
  def sup[Y <: X](x: Y, y: Y): Y = max(x, y)
  def inf[Y <: X](x: Y, y: Y): Y = min(x, y)
}
However, the method def sup[Y <: X](x: Y, y: Y): Y cannot override def sup(x: X, y: X): X; the compiler complains that the type signature does not match. But the former (with the use-site bound) imposes stronger type restrictions than the latter. Why can't the former override the latter? How can I get around the contravariance restrictions on TotalOrder[-X] (semantically, a total order is contravariant)?
This is not semantically correct. It should be clear from the definitions of covariance and contravariance, but I'll try to give an example:
Suppose we have a hierarchy of entities:
class Shape(val s: Float) // vals so that s and r can be accessed as fields below
class Circle(val r: Float) extends Shape(Math.PI.toFloat * r * r)
And let's assume that it's possible to create contravariant orders, as you tried:
trait CircleOrder extends TotalOrder[Circle] {
  // compare by r
}

trait ShapeOrder extends TotalOrder[Shape] {
  // compare by s
}
By the definition of contravariance, since Circle <: Shape,
TotalOrder[Shape] <: TotalOrder[Circle]
(so a ShapeOrder can be used wherever a TotalOrder[Circle], e.g. a CircleOrder, is expected).
Suppose we have a client that takes a TotalOrder[Circle] as an argument
and uses it to compare circles:
def clientMethod(circleOrder: TotalOrder[Circle]) = {
  val maxCircle = circleOrder.max(???, ???) // expected to return a Circle
  maxCircle.r // accessing a field that is present only on Circle
}
Then, by the rules of subtyping, it should be possible to pass
a ShapeOrder instead of a CircleOrder (remember, ShapeOrder is the subtype here):
clientMethod(new ShapeOrder {/*...*/})
Obviously this cannot work, as the client still expects the order to return Circles, not Shapes.
I think in your case the most reasonable approach would be to use regular (invariant) generics.
Update
This is how you can ensure type safety, but it's a bit ugly.
trait WeakOrder[-X] {
  def cmp(x: X, y: X): Int
  def max[T](x: X with T, y: X with T): T =
    if (cmp(x, y) >= 0) x else y
  def min[T](x: X with T, y: X with T): T =
    if (cmp(x, y) <= 0) x else y
}

trait Lattice[X] {
  def sup[T](x: X with T, y: X with T): T
  def inf[T](x: X with T, y: X with T): T
}

trait TotalOrder[-X] extends Lattice[X] with WeakOrder[X] {
  def sup[T](x: X with T, y: X with T): T = max(x, y)
  def inf[T](x: X with T, y: X with T): T = min(x, y)
}

Python 3 Operator Overloading

I'm trying to define the add operator for my class Point. Point is exactly what it sounds like: an (x, y) point. I can't seem to get the operator to work, though, because the code keeps printing something like <__main__.Point object at 0x...>. I'm pretty new to this stuff, so can someone explain what I am doing wrong? Thanks. Here is my code:
class Point:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def __add__(self, other):
        return Point(self.x + other.x, self.y + other.y)

p1 = Point(3, 4)
p2 = Point(5, 6)
p3 = p1 + p2
print(p3)
Your add function is working as intended. It's your print that's the problem. You're getting an ugly result like <__main__.Point object at 0x027FA5B0> because you haven't told the class how you want it to display itself. Implement __str__ or __repr__ so that it shows a nice string.
class Point:
    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def __add__(self, other):
        return Point(self.x + other.x, self.y + other.y)

    def __repr__(self):
        return "Point({}, {})".format(self.x, self.y)

p1 = Point(3, 4)
p2 = Point(5, 6)
p3 = p1 + p2
print(p3)
Result:
Point(8, 10)