Write this Scala Matrix multiplication in Haskell [duplicate] - scala

Possible Duplicate:
Can you overload + in haskell?
Can you implement a Matrix class and an * operator that will work on two matrices?:
scala> val x = Matrix(3, 1,2,3,4,5,6)
x: Matrix =
[1.0, 2.0, 3.0]
[4.0, 5.0, 6.0]
scala> x*x.transpose
res0: Matrix =
[14.0, 32.0]
[32.0, 77.0]
and just so people don't say that it's hard, here is the Scala implementation (courtesy of Jonathan Merritt):
// Note: this uses the old (Scala 2.7-era) List companion methods List.map2 and List.transpose.
class Matrix(els: List[List[Double]]) {

  /** Elements of the matrix, stored as a list of its rows. */
  val elements: List[List[Double]] = els

  def nRows: Int = elements.length

  def nCols: Int = if (elements.isEmpty) 0 else elements.head.length

  /** All rows of the matrix must have the same number of columns. */
  require(elements.forall(_.length == nCols))

  /* Element-wise operations on two rows. */
  private def addRows(a: List[Double], b: List[Double]): List[Double] =
    List.map2(a, b)(_ + _)

  private def subRows(a: List[Double], b: List[Double]): List[Double] =
    List.map2(a, b)(_ - _)

  def +(other: Matrix): Matrix = {
    require((other.nRows == nRows) && (other.nCols == nCols))
    new Matrix(List.map2(elements, other.elements)(addRows(_, _)))
  }

  def -(other: Matrix): Matrix = {
    require((other.nRows == nRows) && (other.nCols == nCols))
    new Matrix(List.map2(elements, other.elements)(subRows(_, _)))
  }

  def transpose(): Matrix = new Matrix(List.transpose(elements))

  private def dotVectors(a: List[Double], b: List[Double]): Double = {
    val multipliedElements = List.map2(a, b)(_ * _)
    (0.0 /: multipliedElements)(_ + _)
  }

  def *(other: Matrix): Matrix = {
    require(nCols == other.nRows)
    val t = other.transpose()
    new Matrix(
      for (row <- elements) yield {
        for (otherCol <- t.elements)
          yield dotVectors(row, otherCol)
      }
    )
  }

  override def toString(): String = {
    val rowStrings =
      for (row <- elements)
        yield row.mkString("[", ", ", "]")
    rowStrings.mkString("", "\n", "\n")
  }
}
/* Matrix constructor from a bunch of numbers */
object Matrix {
  def apply(nCols: Int, els: Double*): Matrix = {
    def splitRowsWorker(inList: List[Double],
                        working: List[List[Double]]): List[List[Double]] =
      if (inList.isEmpty)
        working
      else {
        val (a, b) = inList.splitAt(nCols)
        splitRowsWorker(b, working + a)
      }
    def splitRows(inList: List[Double]) =
      splitRowsWorker(inList, List[List[Double]]())
    val rows: List[List[Double]] = splitRows(els.toList)
    new Matrix(rows)
  }
}
EDIT: I now understand that, strictly speaking, the answer is no: overloading * is not possible without the side effect of also having to define + and the other Num operations, or without special tricks. The numeric-prelude package describes it best:
In some cases, the hierarchy is not finely-grained enough: Operations
that are often defined independently are lumped together. For
instance, in a financial application one might want a type "Dollar",
or in a graphics application one might want a type "Vector". It is
reasonable to add two Vectors or Dollars, but not, in general,
reasonable to multiply them. But the programmer is currently forced to
define a method for '(*)' when she defines a method for '(+)'.
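To make the quoted point concrete, here is a minimal sketch (the Dollar type is made up purely for illustration): as soon as you give Dollar a Num instance for the sake of +, you are forced to say something about * as well.
newtype Dollar = Dollar Double deriving (Show, Eq)

instance Num Dollar where
  Dollar a + Dollar b = Dollar (a + b)
  Dollar a - Dollar b = Dollar (a - b)
  negate (Dollar a)   = Dollar (negate a)
  abs (Dollar a)      = Dollar (abs a)
  signum (Dollar a)   = Dollar (signum a)
  fromInteger n       = Dollar (fromInteger n)
  -- Multiplying two dollar amounts has no sensible meaning,
  -- but Num forces us to define (*) anyway.
  _ * _               = error "Dollar: (*) is not meaningful"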

It will be perfectly safe with a smart constructor and stored dimensions. Of course there are no natural implementations for the operations signum and fromInteger (or maybe a diagonal matrix would be fine for the latter).
module Matrix (Matrix(), matrix, matrixTranspose) where

import Data.List (transpose)

data Matrix a = Matrix {matrixN :: Int,
                        matrixM :: Int,
                        matrixElems :: [[a]]}
  deriving (Show, Eq)

matrix :: Int -> Int -> [[a]] -> Matrix a
matrix n m vals
  | length vals /= m = error "Wrong number of rows"
  | any (/= n) $ map length vals = error "Column length mismatch"
  | otherwise = Matrix n m vals

matrixTranspose (Matrix m n vals) = matrix n m (transpose vals)

instance Num a => Num (Matrix a) where
  (+) (Matrix m n vals) (Matrix m' n' vals')
    | m /= m' = error "Row number mismatch"
    | n /= n' = error "Column number mismatch"
    | otherwise = Matrix m n (zipWith (zipWith (+)) vals vals')
  abs (Matrix m n vals) = Matrix m n (map (map abs) vals)
  negate (Matrix m n vals) = Matrix m n (map (map negate) vals)
  (*) (Matrix m n vals) (Matrix n' p vals')
    | n /= n' = error "Matrix dimension mismatch in multiplication"
    | otherwise =
        let tvals' = transpose vals'
            dot x y = sum $ zipWith (*) x y
            result = map (\col -> map (dot col) tvals') vals
        in Matrix m p result
Test it in ghci:
*Matrix> let a = matrix 3 2 [[1,0,2],[-1,3,1]]
*Matrix> let b = matrix 2 3 [[3,1],[2,1],[1,0]]
*Matrix> a*b
Matrix {matrixN = 3, matrixM = 3, matrixElems = [[5,1],[4,2]]}
Since my Num instance is generic, it even works for complex matrices out of the box:
Prelude Data.Complex Matrix> let c = matrix 2 2 [[0:+1,1:+0],[5:+2,4:+3]]
Prelude Data.Complex Matrix> let a = matrix 2 2 [[0:+1,1:+0],[5:+2,4:+3]]
Prelude Data.Complex Matrix> let b = matrix 2 3 [[3:+0,1],[2,1],[1,0]]
Prelude Data.Complex Matrix> a
Matrix {matrixN = 2, matrixM = 2, matrixElems = [[0.0 :+ 1.0,1.0 :+ 0.0],[5.0 :+ 2.0,4.0 :+ 3.0]]}
Prelude Data.Complex Matrix> b
Matrix {matrixN = 2, matrixM = 3, matrixElems = [[3.0 :+ 0.0,1.0 :+ 0.0],[2.0 :+ 0.0,1.0 :+ 0.0],[1.0 :+ 0.0,0.0 :+ 0.0]]}
Prelude Data.Complex Matrix> a*b
Matrix {matrixN = 2, matrixM = 3, matrixElems = [[2.0 :+ 3.0,1.0 :+ 1.0],[23.0 :+ 12.0,9.0 :+ 5.0]]}
EDIT: new material
Oh, you want to just override the (*) function without any Num stuff. That's possible to do, but you'll have to remember that the Haskell standard library has reserved (*) for use in the Num class.
module Matrix where

import qualified Prelude as P
import Prelude hiding ((*))
import Data.List (transpose)

class Multiply a where
  (*) :: a -> a -> a

data Matrix a = Matrix {matrixN :: Int,
                        matrixM :: Int,
                        matrixElems :: [[a]]}
  deriving (Show, Eq)

matrix :: Int -> Int -> [[a]] -> Matrix a
matrix n m vals
  | length vals /= m = error "Wrong number of rows"
  | any (/= n) $ map length vals = error "Column length mismatch"
  | otherwise = Matrix n m vals

matrixTranspose (Matrix m n vals) = matrix n m (transpose vals)

instance P.Num a => Multiply (Matrix a) where
  (*) (Matrix m n vals) (Matrix n' p vals')
    | n /= n' = error "Matrix dimension mismatch in multiplication"
    | otherwise =
        let tvals' = transpose vals'
            dot x y = sum $ zipWith (P.*) x y
            result = map (\col -> map (dot col) tvals') vals
        in Matrix m p result
a = matrix 3 2 [[1,2,3],[4,5,6]]
b = a * matrixTranspose a
Testing in ghci:
*Matrix> b
Matrix {matrixN = 3, matrixM = 3, matrixElems = [[14,32],[32,77]]}
There. Now if a third module wants to use both the Matrix version of (*) and the Prelude version of (*), it will of course have to import one or the other qualified. But that's just business as usual.
I could have done all of this without the Multiply type class, but this implementation leaves our shiny new (*) open for extension in other modules.

Alright, there's a lot of confusion floating around about what's happening here, and it's not helped by the fact that the Haskell term "class" does not line up with the OO term "class" in any meaningful way. So let's try to give a careful answer. This answer starts with Haskell's module system.
In Haskell, when you import a module Foo.Bar, it creates a new set of bindings. For each variable x exported by the module Foo.Bar, you get a new name Foo.Bar.x. In addition, you may:
import qualified or not. If you import qualified, nothing more happens. If you do not, an additional name without the module prefix is defined; in this case, just plain old x is defined.
change the qualification prefix or not. If you import as Alias, then the name Foo.Bar.x is not defined, but the name Alias.x is.
hide certain names. If you hide name foo, then neither the plain name foo nor any qualified name (like Foo.Bar.foo or Alias.foo) is defined.
Furthermore, names may be multiply defined. For example, if Foo.Bar and Baz.Quux both export the variable x, and I import both modules without qualification, then the name x refers to both Foo.Bar.x and Baz.Quux.x. If the name x is never used in the resulting module, this clash is ignored; otherwise, a compiler error asks you to provide more qualification.
Finally, if none of your imports mention the module Prelude, the following implicit import is added:
import Prelude
This imports the Prelude without qualification, with no additional prefix, and without hiding any names. So it defines "bare" names and names prefixed by Prelude., and nothing more.
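As a quick illustration of these rules (the module names here are just examples):
import qualified Data.List    -- only Data.List.sort etc. are in scope; no bare sort
import Data.List as L (sort)  -- defines sort and L.sort, but not Data.List.sort
import Prelude hiding ((*))   -- the hidden (*) is not available at all from this import,
                              -- neither bare nor as Prelude.*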
Here ends the bare basics you need to understand about the module system. Now let's discuss the bare basics you need to understand about typeclasses.
A typeclass includes a class name, a list of type variables bound by that class, and a collection of variables with type signatures that refer to the bound variables. Here's an example:
class Foo a where
foo :: a -> a -> Int
The class name is Foo, the bound type variable is a, and there is only one variable in the collection, namely foo, with type signature a -> a -> Int. This class declares that some types have a binary operation, named foo, which computes an Int. Any type may later (even in another module) be declared to be an instance of this class: this involves defining the binary operation above, where the bound type variable a is substituted with the type you are creating an instance for. As an example, we might implement this for integers by the instance:
instance Foo Int where
foo a b = (a `mod` 76) * (b + 7)
Here ends the bare basics you need to understand about typeclasses. We may now answer your question. The only reason the question is tricky is because it falls smack dab on the intersection between two name management techniques: modules and typeclasses. Below I discuss what this means for your specific question.
The module Prelude defines a typeclass named Num, which includes in its collection of variables a variable named *. Therefore, we have several options for the name *:
If the type signature we desire happens to follow the pattern a -> a -> a, for some type a, then we may implement the Num typeclass. We therefore extend the Num class with a new instance; the name Prelude.* and any aliases for this name are extended to work for the new type. For matrices, this would look like, for example,
instance Num Matrix where
m * n = {- implementation goes here -}
We may define a different name than *.
m |*| n = {- implementation goes here -}
We may define the name *. Whether this name is defined as part of a new type class or not is immaterial. If we do nothing else, there will then be at least two definitions of *, namely, the one in the current module and the one implicitly imported from the Prelude. We have a variety of ways of dealing with this. The simplest is to explicitly import the Prelude, and ask for the name * not to be defined:
import Prelude hiding ((*))
You might alternatively choose to leave the implicit import of the Prelude in place, and use a qualified * everywhere you use it. Other solutions are also possible.
The main point I want you to take away from this is: the name * is in no way special. It is just a name defined by the Prelude, and all of the tools we have available for namespace control are available.
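For example, a module can take over the bare name * and still reach the Prelude's multiplication through a qualified name. This is only a sketch; the module name and the element-wise product are made up for illustration:
module Scaling where

-- Only (*) is hidden; everything else from the Prelude stays in scope.
import Prelude hiding ((*))
import qualified Prelude as P

-- A made-up element-wise product on lists, reusing the bare name (*).
(*) :: Num a => [a] -> [a] -> [a]
(*) = zipWith (P.*)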

You can implement * as matrix multiplication by defining an instance of the Num class for Matrix. But the code won't be type-safe: * (and the other arithmetic operations) on matrices defined this way are not total, because of possible size mismatches or, in the case of /, the non-existence of inverse matrices.
As for 'the hierarchy is not finely-grained enough': there is also the Monoid type class, exactly for the cases where only one operation is defined.
There are too many things that can be 'added', sometimes in rather exotic ways (think of permutation groups). The Haskell designers chose to reserve the arithmetic operators for the different representations of numbers, and to use other names for the more exotic cases.
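As a rough sketch of that idea, reusing the Matrix type and its Num instance from the answer above (and assuming a GHC where Semigroup is in the Prelude), matrix addition fits Semigroup; a lawful mempty would additionally require the dimensions to be known statically:
newtype MatrixSum a = MatrixSum (Matrix a) deriving (Show, Eq)

-- Addition is the only operation we promise, so no (*) is forced on us.
instance Num a => Semigroup (MatrixSum a) where
  MatrixSum x <> MatrixSum y = MatrixSum (x + y)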

Related

Achieving compile safe indexing with Shapeless

Problem: Let val v = List(0.5, 1.2, 0.3) model a realisation of some vector v = (v_1, v_2, v_3). The index j in v_j is implied by the element's position in the list. A lot of boilerplate and bugs have been due to tracking these indices, e.g., when creating modified lists from the original. (How to make sure (in compile-time) that collection wasn't reordered? seems related.)
General question: What could be a good way to ensure correct indexing at compile time? (I assume that performance is not essential.)
My plan is to use subclasses of Nat in shapeless to model the indices as (explicit) types. The current solution is
import shapeless._
import Nat._

trait Elem[+A, +N <: Nat] {
  val v: A
  val ind: N
}

case class DecElem[N <: Nat](v: BigDecimal, ind: N) extends Elem[BigDecimal, N]

object Decimals {
  type One = DecElem[_0] :: HNil
  type Two = DecElem[_0] :: DecElem[_1] :: HNil
  //...
}
case class Scalar(v: Decimals.One)
case class VecTwo(v: Decimals.Two)
This, however, gets tedious in larger dimensions.
Another question is how to approach the generic case in trait Elem[+A, +N<:Nat]. As a start, I defined case class ElemVector[A, M<:Nat](vs: Sized[List[Elem[A, Nat]], M]), which loses the specific index type in Elem. What might be a strategy to circumvent this difficulty?
(Note: A better design may be to wrap List[A] by attaching explicit indices and deal with wrapper class. This, however, does not essentially change the question.)
UPDATE: Here is a trivial illustration of accidental swapping of vector elements.
import shapeless.Nat._
import shapeless.{Sized, nat}
type VectorTwo = Sized[IndexedSeq[Double], nat._2]
val f = (p: VectorTwo, x: Double) => {
  val V = p(_0)
  val K = p(_1)
  V * x / (K + x)
}
val V : Double = 500
val K : Double = 1
val correct: VectorTwo = Sized(V, K)
val wrong: VectorTwo = Sized(K, V) //Compiles!
f(correct, 10) // = 454.54
f(wrong, 10) // = 0.02
I'd like to enlist the compiler to prevent such errors in vectors with many elements, and wonder if there could be an elegant solution with Shapeless.

Can I condense this expression by using some `map` or `apply`?

I have this code which I'd like to condense
runFn someFn $ toArray a1 $ toArray a2 $ toArray a3 $ toArray a4
I would envision something like
runFn someFn <$> fmap toArray [a1, a2, a3, a4]
In that case, runFn someFn would create a partially applied function that awaits its missing parameters, which would then be applied one by one to the elements of the array.
I must admit I don't know if the type system will allow for this.
Edit
As it was asked for - here the actual type signatures for this.
However, my question is a more general one:
Can I put the parameters of a function into an array, if those parameters all have the same type, and then partially apply the function element by element?
type Numbers = Array Number
foreign import _intersect :: Numbers -> Numbers -> Numbers -> Numbers -> Point2D
data Point2D = Point2D Number Number
toArray :: Point2D -> Numbers
toArray (Point2D x y) = [x, y]
a1 = Point2D 0.5 7.0
a2 = Point2D 3.0 5.1
b1 = Point2D 2.5 9.0
b2 = Point2D 2.1 3.6
intersection :: Numbers -- 2 element Array Number
intersection = _intersect (toArray a1) (toArray a2) (toArray b1) (toArray b2)
If I understand what you're asking here, no you can't. You'd need dependent types to be able to express this, as you'd need some way of ensuring the function arity and array length are equal at the type level.
If we are allowed to change the question slightly, we might be able to achieve what I think you want. The fact that all functions are curried allows us to use type class instance resolution to come up with some tricks that sort of allow us to do things with functions of arbitrary arity. For example:
module Main where

import Prelude
import Data.Foldable (sum)
import Control.Monad.Eff.Console (print)

class Convert a b where
  convert :: a -> b

data Point2D = Point2D Number Number

type Numbers = Array Number

instance convertId :: Convert a a where
  convert = id

instance convertPoint :: Convert Point2D (Array Number) where
  convert (Point2D x y) = [x, y]

instance convertChain :: (Convert b a, Convert r r') => Convert (a -> r) (b -> r') where
  convert a2r b = convert (a2r (convert b))

f :: Numbers -> Numbers -> Number
f xs ys = sum xs * sum ys

g :: Point2D -> Point2D -> Number
g = convert f

f' :: Numbers -> Numbers -> Numbers -> Numbers -> Number
f' a b c d = f a b + f c d

g' :: Point2D -> Point2D -> Point2D -> Point2D -> Number
g' = convert f'

main =
  print $ g'
    (Point2D 1.0 2.0)
    (Point2D 3.0 4.0)
    (Point2D 5.0 6.0)
    (Point2D 7.0 8.0)
These techniques have some nice applications. QuickCheck, for example, uses a similar technique to allow you to call quickCheck on functions of any arity, as long as all the arguments have Arbitrary instances. In this specific case, though, I think I would stick to the simpler, more boilerplate-y solution: unrestrained use of type classes can become quite unwieldy, and can produce very confusing error messages.

Is there a way to derive Num class functions in own data type in Haskell?

Let's say I have a type declaration:
data MyType = N Double | C Char | Placeholder
I want to be able to treat MyType as a Double whenever possible, with all the Num, Real, and Fractional functions returning N (the normal result) for arguments wrapped in the N constructor, and Placeholder for other arguments:
> (N 5.0) + (N 6.0)
N 11.0
> (N 5.0) + (C 'a')
Placeholder
Is there a way to do this other than simply defining this type as an instance of those classes, in a manner similar to:
instance Num MyType where
  (+) (N d1) (N d2) = N (d1 + d2)
  (+) _ _ = Placeholder
  ...
(which seems counter-productive)?
There is no generic deriving available in standard Haskell: currently, deriving is only available as defined by the compiler for specific Prelude typeclasses: Read, Show, Eq, Ord, Enum, and Bounded.
The Glasgow Haskell Compiler (GHC) apparently has extensions that support generic deriving. However, I don't know if it would actually save you any work to try and use them: how many typeclasses do you need to derive a Num instance from? And, are you sure that you can define an automatic scheme for deriving Num that will always do what you want?
As noted in the comments, you need to describe what your Num instance will do in any case. And describing and debugging a general scheme is certain to be more work than describing a particular one.
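For what it's worth, one situation where GHC can derive a usable Num instance today is a newtype wrapper, via the GeneralizedNewtypeDeriving extension; that doesn't help with a sum type like MyType, but as a sketch:
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
-- Num is simply reused from the underlying Double.
newtype Meters = Meters Double deriving (Show, Eq, Num)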
No, you can't do this automatically, but I think what leftaroundabout could have been getting at is that you can use Applicative operations to help you.
{-# LANGUAGE DeriveFunctor #-}
import Control.Monad (ap)

data MyType n = N n | C Char | Placeholder deriving (Show, Eq, Functor)

instance Applicative MyType where
  pure = N
  (<*>) = ap

instance Monad MyType where
  N n >>= f = f n
  C c >>= _ = C c
  Placeholder >>= _ = Placeholder
Now you can write
instance Num n => Num (MyType n) where
  x + y = (+) <$> x <*> y
  abs = fmap abs
  ...
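Assuming the remaining Num methods are filled in, a quick GHCi check might look like the following. Note that with this lifting the non-numeric constructor itself propagates, so the mixed case yields C 'a' rather than the Placeholder the question asked for:
> N 5.0 + N 6.0 :: MyType Double
N 11.0
> N 5.0 + C 'a' :: MyType Double
C 'a'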

For comprehension and number of function creation

Recently I had an interview for a Scala Developer position, and I was asked the following question:
// matrix 100x100 (content unimportant)
val matrix = Seq.tabulate(100, 100) { case (x, y) => x + y }
// A
for {
  row <- matrix
  elem <- row
} print(elem)
// B
val func = print _
for {
  row <- matrix
  elem <- row
} func(elem)
and the question was: Which implementation, A or B, is more efficient?
We all know that for comprehensions can be translated to
// A
matrix.foreach(row => row.foreach(elem => print(elem)))
// B
matrix.foreach(row => row.foreach(func))
B can be written as matrix.foreach(row => row.foreach(print _))
The supposedly correct answer is B, because A will create the print function 100 times more often.
I have checked the Language Specification but still fail to understand the answer. Can somebody explain this to me?
In short:
Example A is faster in theory; in practice, though, you shouldn't be able to measure any difference.
Long answer:
As you already found out
for {xs <- xxs; x <- xs} f(x)
is translated to
xxs.foreach(xs => xs.foreach(x => f(x)))
This is explained in §6.19 SLS:
A for loop
for ( p <- e; p' <- e' ... ) e''
where ... is a (possibly empty) sequence of generators, definitions, or guards, is translated to
e .foreach { case p => for ( p' <- e' ... ) e'' }
Now when one writes a function literal, one gets a new instance every time the function needs to be called (§6.23 SLS). This means that
xs.foreach(x => f(x))
is equivalent to
xs.foreach(new scala.Function1 { def apply(x: T) = f(x)})
When you introduce a local function value
val g = f _; xxs.foreach(xs => xs.foreach(x => g(x)))
you are not introducing an optimization because you still pass a function literal to foreach. In fact the code is slower because the inner foreach is translated to
xs.foreach(new scala.Function1 { def apply(x: T) = g.apply(x) })
where an additional call to the apply method of g happens. Though, you can optimize when you write
val g = f _; xxs.foreach(xs => xs.foreach(g))
because the inner foreach now is translated to
xs.foreach(g)
which means that the function g itself is passed to foreach.
This would mean that B is faster in theory, because no anonymous function would need to be created each time the body of the for comprehension is executed. However, the optimization mentioned above (passing the function directly to foreach) is not applied to for comprehensions, because, as the spec says, the translation always includes the creation of function literals; therefore unnecessary function objects are created anyway. (The compiler could optimize that as well, but it doesn't: optimizing for comprehensions is difficult and still does not happen in 2.11.) All in all, this means that A is more efficient, and B would only be more efficient if it were written without a for comprehension (so that no function literal is created for the innermost function).
Nevertheless, all of these rules only apply in theory, because in practice the scalac backend and the JVM can both perform optimizations, not to mention optimizations done by the CPU. Furthermore, your example contains a syscall that is executed on every iteration; that is probably the most expensive operation here and outweighs everything else.
I'd agree with sschaef and say that A is the more efficient option.
Looking at the generated class files we get the following anonymous functions and their apply methods:
MethodA:
anonfun$2 -- row => row.foreach(new anonfun$2$$anonfun$1)
anonfun$2$$anonfun$1 -- elem => print(elem)
i.e. matrix.foreach(row => row.foreach(elem => print(elem)))
MethodB:
anonfun$3 -- x => print(x)
anonfun$4 -- row => row.foreach(new anonfun$4$$anonfun$2)
anonfun$4$$anonfun$2 -- elem => func(elem)
i.e. matrix.foreach(row => row.foreach(elem => func(elem)))
where func is just another indirection before calling print. In addition, func needs to be looked up, i.e. through a method call on an instance (this.func()), for each row.
So for method B, one extra object is created (func) and there is one additional function call per element.
The most efficient option would be
matrix.foreach(row => row.foreach(func))
as this has the least number of objects created and does exactly as you would expect.
Benchmark
Summary
Method A is nearly 30% faster than method B.
Link to code: https://gist.github.com/ziggystar/490f693bc39d1396ef8d
Implementation Details
I added method C (two while loops) and D (fold, sum). I also increased the size of the matrix and used an IndexedSeq instead. Also I replaced the print with something less heavy (sum all entries).
Strangely the while construct is not the fastest. But if one uses Array instead of IndexedSeq it becomes the fastest by a large margin (factor 5, no boxing anymore). Using explicitly boxed integers, methods A, B, C are all equally fast. In particular they are faster by 50% compared to the implicitly boxed versions of A, B.
Results
A
4.907797735
4.369745787
4.375195012000001
4.7421321800000005
4.35150636
B
5.955951859000001
5.925475619
5.939570085000001
5.955592247
5.939672226000001
C
5.991946029
5.960122757000001
5.970733164
6.025532582
6.04999499
D
9.278486201
9.265983922
9.228320372
9.255641645
9.22281905
verify results
999000000
999000000
999000000
999000000
>$ scala -version
Scala code runner version 2.11.0 -- Copyright 2002-2013, LAMP/EPFL
Code excerpt
val matrix = IndexedSeq.tabulate(1000, 1000) { case (x, y) => x + y }

def variantA(): Int = {
  var r = 0
  for {
    row <- matrix
    elem <- row
  } {
    r += elem
  }
  r
}

def variantB(): Int = {
  var r = 0
  val f = (x: Int) => r += x
  for {
    row <- matrix
    elem <- row
  } f(elem)
  r
}

def variantC(): Int = {
  var r = 0
  var i1 = 0
  while (i1 < matrix.size) {
    var i2 = 0
    val row = matrix(i1)
    while (i2 < row.size) {
      r += row(i2)
      i2 += 1
    }
    i1 += 1
  }
  r
}

def variantD(): Int = matrix.foldLeft(0)(_ + _.sum)

How to divide a pair of Num values?

Here is a function that takes a pair of Integral
values and divides them:
divide_v1 :: Integral a => (a, a) -> a
divide_v1 (m, n) = (m + n) `div` 2
I invoke the function with a pair of Integral
values and it works as expected:
divide_v1 (1, 3)
Great. That's perfect if my numbers are always Integrals.
Here is a function that takes a pair of Fractional
values and divides them:
divide_v2 :: Fractional a => (a, a) -> a
divide_v2 (m, n) = (m + n) / 2
I invoke the function with a pair of Fractional
values and it works as expected:
divide_v2 (1.0, 3.0)
Great. That's perfect if my numbers are always Fractionals.
I would like a function that works regardless of whether the
numbers are Integrals or Fractionals:
divide_v3 :: Num a => (a, a) -> a
divide_v3 (m, n) = (m + n) ___ 2
What operator do I use for ___?
To expand on what AndrewC said, div doesn't have the same properties that / does. For example, in maths, if a divided by b equals c, then c times b equals a. When working with types like Double and Float, the operations / and * satisfy this property (to the extent that the accuracy of the type allows). But when using div with Ints, the property doesn't hold: 5 `div` 3 = 1, but 1 * 3 /= 5! So if you want to use the same "divide operation" for a variety of numeric types, you need to think about how you want it to behave. Also, you almost certainly wouldn't want to use the operator / itself for it, because that would be misleading.
If you want your "divide operation" to return the same type as its operands, here's one way to accomplish that:
class Divideable a where
  mydiv :: a -> a -> a

instance Divideable Int where
  mydiv = div

instance Divideable Double where
  mydiv = (/)
In GHCi, it looks like this:
λ> 5 `mydiv` 3 :: Int
1
λ> 5 `mydiv` 3 :: Double
1.6666666666666667
λ> 5.0 `mydiv` 3.0 :: Double
1.6666666666666667
On the other hand, if you want to do "true" division, you would need to convert the integral types like this:
class Divideable2 a where
  mydiv2 :: a -> a -> Double

instance Divideable2 Int where
  mydiv2 a b = fromIntegral a / fromIntegral b

instance Divideable2 Double where
  mydiv2 = (/)
In GHCi, this gives:
λ> 5 `mydiv2` 3
1.6666666666666667
λ> 5.0 `mydiv2` 3.0
1.6666666666666667
I think you are looking for associated types, which allow for implicit type coercion and are explained quite nicely here. Below is an example for the addition of Doubles and Integers.
{-# LANGUAGE MultiParamTypeClasses, TypeFamilies, FlexibleInstances #-}

class Add a b where
  type SumTy a b
  add :: a -> b -> SumTy a b

instance Add Integer Double where
  type SumTy Integer Double = Double
  add x y = fromIntegral x + y

instance Add Double Integer where
  type SumTy Double Integer = Double
  add x y = x + fromIntegral y

instance (Num a) => Add a a where
  type SumTy a a = a
  add x y = x + y
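With suitable extensions enabled (MultiParamTypeClasses, TypeFamilies, FlexibleInstances), instance selection then works out roughly like this in GHCi; the literals are just examples:
λ> add (1 :: Integer) (2.5 :: Double)
3.5
λ> add (2 :: Int) (3 :: Int)
5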