Min-cost flow modification for arcs with fixed cost? - or-tools

I have a min-cost flow network in which some arcs have a fixed charge, that is, if arc k has non-zero flow x_k, then the cost is c_k, independent of the amount of flow. A flow of 0 incurs 0 cost. These arcs do not have capacity constraints.
I know how to model this as a mixed integer program (MIP): Add a 0/1 variable y_k with cost c_k. Set the capacity on arc k to M * y_k, where M is larger than the sum of all supplies. So the fixed cost is incurred if and only if the arc has flow.
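For concreteness, here is a minimal sketch of that big-M formulation using OR-Tools' MIP wrapper (pywraplp); the tiny network, node names and costs below are made up purely for illustration:

from ortools.linear_solver import pywraplp

# Hypothetical data: supply > 0 is a source, supply < 0 is a sink.
supply = {'s': 10, 'a': 0, 'b': 0, 't': -10}
# arc -> (unit_cost, fixed_cost); fixed_cost == 0 means an ordinary arc.
arcs = {('s', 'a'): (1, 0), ('s', 'b'): (2, 0),
        ('a', 't'): (1, 5), ('b', 't'): (1, 0)}
M = sum(v for v in supply.values() if v > 0)  # big-M: total supply

solver = pywraplp.Solver.CreateSolver('SCIP')
x = {k: solver.NumVar(0, solver.infinity(), f'x_{k}') for k in arcs}
y = {k: solver.BoolVar(f'y_{k}') for k, (_, fc) in arcs.items() if fc > 0}

# A fixed-charge arc may carry flow only if its 0/1 indicator is switched on.
for k in y:
    solver.Add(x[k] <= M * y[k])

# Flow conservation: flow out minus flow in equals the node's supply.
for n, s in supply.items():
    solver.Add(sum(x[k] for k in arcs if k[0] == n)
               - sum(x[k] for k in arcs if k[1] == n) == s)

# Variable (per-unit) cost plus the fixed charge on every opened arc.
solver.Minimize(sum(uc * x[k] for k, (uc, _) in arcs.items())
                + sum(arcs[k][1] * y[k] for k in y))

if solver.Solve() == pywraplp.Solver.OPTIMAL:
    print('total cost:', solver.Objective().Value())
    print({k: x[k].solution_value() for k in arcs})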
Can this be solved using a min-cost flow formulation, which would be more efficient than a general MIP implementation? Does OR-Tools (or any other package) have an extension to min-cost flow that accommodates this?
Cross-posted to the Google OR-Tools list.
Thanks,
Hershel

I'm not sure that I understand you (most likely due to my ignorance); you will possibly get a better response from the OR forum than here.
However, I think there may be a way of doing what you ask by modelling it as a circuit via AddCircuit().
Essentially, I believe one can minimise (or maximise) the use of those arcs which are marked as having a cost.
Here is an example using the AddCircuit constraint, where one outgoing arc from each node has a fixed cost.
from ortools.sat.python import cp_model


class DiGraphSolver:

    def __init__(self, desc):
        self.model = cp_model.CpModel()
        self.status = cp_model.UNKNOWN
        self.timing = None

        # AddCircuit needs a numeric index for each node.
        # Here are two lazy key->index / index->key lookups.
        self.keys = {k: i for i, k in enumerate(desc.nodes.keys())}
        self.revs = {i: k for k, i in self.keys.items()}

        # Determine the start and stop nodes.
        self.start = self.keys[desc.start]
        self.stop = self.keys[desc.stop]

        # Store the nodes dict in its indexed form.
        self.nodes = {self.keys[head]: [self.keys[t] for t in tails] for head, tails in desc.nodes.items()}
        self.heavies = [(self.keys[head], self.keys[tail]) for head, tail in desc.heavies.items()]

        self.arcs = []
        self.vars = []
        self.result = []
        self.heavy_arcs = []
        self.weight = 0
        self.step_count = 0

    def setup(self):
        self.arcs = [
            (head, tail, self.model.NewBoolVar(f'{head}:{tail}')) for head, tails in self.nodes.items() for tail in tails
        ]
        self.heavy_arcs = [arc[2] for arc in self.arcs if arc[:-1] in self.heavies]

        # vars is a list of all the arc literals defined in the problem.
        self.vars = [arc[2] for arc in self.arcs]

        # Add self-loops for all *optional* nodes (because AddCircuit requires a Hamiltonian circuit).
        # For this example, that's everywhere except 'start' and 'stop'.
        # We just use the keys of self.revs (the index values).
        loops = [(n, n, self.model.NewBoolVar(f'{n}:{n}')) for n in self.revs if n not in [self.start, self.stop]]
        self.arcs += loops

        # Connect the stop node back to the start node with a dummy arc to complete the Hamiltonian circuit.
        # Because start and stop are not self-closing (non-optional), we don't need to set truth values.
        loop = (self.stop, self.start, self.model.NewBoolVar('loop'))
        self.arcs.append(loop)

        # Now add the circuit as a constraint.
        self.model.AddCircuit(self.arcs)

        # Now reduce the use of weighted arcs.
        self.model.Minimize(sum(self.heavy_arcs))  # look for the shortest network with the lightest weight.

    def solve(self) -> bool:
        cp_solver = cp_model.CpSolver()
        cp_solver.parameters.max_time_in_seconds = 1
        cp_solver.parameters.num_search_workers = 12
        self.status = cp_solver.Solve(self.model)
        return self.summarise(cp_solver)

    def summarise(self, cp_solver) -> bool:
        if self.status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
            self.store(cp_solver)
            return True
        else:
            if self.status == cp_model.INFEASIBLE:
                print(f"Challenge for {self.step_count} arc{'s ' if self.step_count > 1 else ' '}is infeasible after {cp_solver.WallTime()}s.")
            else:
                print("Solver ran out of time.")
            return False

    def store(self, cp_solver):
        self.timing = cp_solver.WallTime()
        used = [arc for arc in self.arcs if cp_solver.Value(arc[2])]
        arc = None, self.start
        while True:
            arc = next((link for link in used if link[0] == arc[1]), None)
            self.result.append(self.revs[arc[0]])
            if arc[1] == self.start:
                break
        self.weight = cp_solver.ObjectiveValue()
        self.step_count = len(self.result) - 1

    def show(self):
        print(f"{'-'.join(self.result)}")
        print(f'Cost: {self.weight}')


class RandomDigraph:
    """
    Define a problem.
    26 nodes, labelled 'a' ... 'z'.
    Start at 'a', stop at 'z'.
    Each node other than 'z' has 4 outgoing arcs (random, but not going to 'a').
    """

    def __init__(self):
        from random import sample, randint
        names = 'abcdefghijklmnopqrstuvwxyz'
        arcs = 4
        self.steps = 1
        self.start = 'a'
        self.stop = 'z'
        but_first = set(names) ^ set(self.start)
        # random.sample needs a sequence (not a set) on Python 3.11+.
        self.nodes = {v: sample(sorted(but_first - {v}), arcs) for v in names}
        self.heavies = {v: self.nodes[v][randint(0, arcs - 1)] for v in names if v != self.stop}
        self.nodes[self.stop] = []

    def print_nodes(self):
        for key, value in self.nodes.items():
            vs = [f" {v} " if v != self.heavies[key] else f"*{v}*" for v in value]
            print(f'{key}: {"".join(vs)}')


def solve_with_steps(problem) -> int:
    solver = DiGraphSolver(problem)
    solver.setup()
    if solver.solve():
        solver.show()
    return solver.step_count


def solve_az_paths_of_a_random_digraph():
    problem = RandomDigraph()
    problem.print_nodes()
    print()
    solve_with_steps(problem)


if __name__ == '__main__':
    solve_az_paths_of_a_random_digraph()
Example run (solving a..z) gives
# network: heavy arcs are marked by wrapping the tail in *..*
# e.g. a->p is a heavy arc.
a: *p* d i l
b: *t* u e y
c: r v *m* q
d: q t *f* l
e: k *o* y i
f: i p z *u*
g: s h i *x*
h: *g* l j d
i: x f e *k*
j: *g* r e p
k: d *c* g q
l: r f j *h*
m: *i* b d r
n: t v y *b*
o: s x q *w*
p: w g *h* n
q: o r *f* p
r: f *c* i m
s: y c w *p*
t: *y* d v i
u: *h* z w n
v: *d* x f t
w: l c *s* r
x: *j* r g m
y: b j *u* c
z:
Solution:
a-i-e-k-g-h-j-p-w-c-q-o-s-y-b-u-n-t-v-x-r-m-d-l-f-z
Cost: 0.0
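If you adapt this to the original fixed-charge question, the objective can carry the actual charges rather than just counting heavy arcs: CP-SAT minimises a weighted sum of the arc literals, as long as the costs are integers. A hedged sketch, assuming a hypothetical dict fixed_cost keyed by (head, tail) index pairs (not part of the code above):

# hypothetical: fixed_cost maps (head, tail) index pairs to integer charges c_k
charged = [fixed_cost[(head, tail)] * lit
           for head, tail, lit in self.arcs
           if (head, tail) in fixed_cost]
self.model.Minimize(sum(charged))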

Related

Matrix Vector multiplication in Scala

I have a matrix of size D by D (implemented as a List[List[Int]]) and a vector of size D (implemented as a List[Int]). Assuming D = 3, I can create the matrix and vector in the following way.
val Vector = List(1,2,3)
val Matrix = List(List(4,5,6) , List(7,8,9) , List(10,11,12))
I can multiply both these as
(Matrix,Vector).zipped.map((x,y) => (x,Vector).zipped.map(_*_).sum )
This code multiplies the matrix with the vector and returns the vector I need. Is there a faster or more optimal way to get the same result in a functional Scala style? In my scenario D is much larger.
What about something like this?
def vectorDotProduct[N : Numeric](v1: List[N], v2: List[N]): N = {
  import Numeric.Implicits._

  // You may replace this with a while loop over two iterators if you require even more speed.
  @annotation.tailrec
  def loop(remaining1: List[N], remaining2: List[N], acc: N): N =
    (remaining1, remaining2) match {
      case (x :: tail1, y :: tail2) =>
        loop(
          remaining1 = tail1,
          remaining2 = tail2,
          acc = acc + (x * y)
        )
      case (Nil, _) | (_, Nil) =>
        acc
    }

  loop(
    remaining1 = v1,
    remaining2 = v2,
    acc = Numeric[N].zero
  )
}

def matrixVectorProduct[N : Numeric](matrix: List[List[N]], vector: List[N]): List[N] =
  matrix.map(row => vectorDotProduct(vector, row))

Variable associated to "Optimization terminated successfully" in scipy.optimize.fmin_cg?

I am using scipy.optimize.fmin_cg (https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.fmin_cg.html).
What is the variable associated to "Optimization terminated successfully"?
I need it such that I could write something like:
if "optimization not succesful" then "stop the for loop"
Thank you.
Just follow the docs.
You are interested in warnflag (as mentioned by cel in the comments), the 5th element returned, so just index the result with result[4] (0-indexing in Python!) to obtain your value.
The docs also say that some of these are only returned when called with argument full_output=True, so do this.
Simple example:
import numpy as np

args = (2, 3, 7, 8, 9, 10)  # parameter values

def f(x, *args):
    u, v = x
    a, b, c, d, e, f = args
    return a*u**2 + b*u*v + c*v**2 + d*u + e*v + f

def gradf(x, *args):
    u, v = x
    a, b, c, d, e, f = args
    gu = 2*a*u + b*v + d    # u-component of the gradient
    gv = b*u + 2*c*v + e    # v-component of the gradient
    return np.asarray((gu, gv))

x0 = np.asarray((0, 0))  # Initial guess.

from scipy import optimize
res1 = optimize.fmin_cg(f, x0, fprime=gradf, args=args, full_output=True)  # full_output !!!
print(res1[4])  # index 4 !!!
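And if you want the loop-control behaviour from your question, a small sketch along the same lines (the list of starting points is made up for illustration):

# Hypothetical follow-up: stop a loop of runs as soon as one is not successful.
for start in ([0, 0], [1, 1], [5, -3]):          # made-up starting points
    res = optimize.fmin_cg(f, np.asarray(start), fprime=gradf, args=args,
                           full_output=True, disp=False)
    warnflag = res[4]   # 0 == "Optimization terminated successfully"
    if warnflag != 0:   # 1: max iterations reached, 2: gradient/function values not changing
        break           # stop the for loop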

Nim operator overloading

Just started programming in the Nim language (which I really like so far). As a learning exercise I am writing a small matrix library. I have a bunch more code, but I'll just show the part that's relevant to this question.
type
  Matrix*[T; nrows, ncols: static[int]] = array[0 .. (nrows * ncols - 1), T]

# Get the index in the flattened array corresponding
# to row r and column c in the matrix
proc index(mat: Matrix, r, c: int): int =
  result = r * mat.ncols + c

# Return the element at r, c
proc `[]`(mat: Matrix, r, c: int): Matrix.T =
  result = mat[mat.index(r, c)]

# Set the element at r, c
proc `[]=`(mat: var Matrix, r, c: int, val: Matrix.T) =
  mat[mat.index(r, c)] = val

# Add a value to every element in the matrix
proc `+=`(mat: var Matrix, val: Matrix.T) =
  for i in 0 .. mat.high:
    mat[i] += val

# Add a value to element at r, c
proc `[]+=`(mat: var Matrix, r, c: int, val: Matrix.T) =
  mat[mat.index(r, c)] += val

# A test case
var mat: Matrix[float, 3, 4] # matrix with 3 rows and 4 columns
mat[1, 3] = 7.0
mat += 1.0
# add 8.0 to entry 1, 3 in matrix
`[]+=`(mat, 1, 3, 8.0) # works fine
All this works fine, but I'd like to be able to replace the last line with something like
mat[1, 3] += 4.0
This won't work (wasn't expecting it to either). If I try it, I get
Error: for a 'var' type a variable needs to be passed
How would I create an addition assignment operator that has this behavior? I'm guessing I need something other than a proc to accomplish this.
There are two ways you can do this:
Overload [] for var Matrix and return a var T (This requires the current devel branch of Nim):
proc `[]`(mat: Matrix, r, c: int): Matrix.T =
  result = mat[mat.index(r, c)]

proc `[]`(mat: var Matrix, r, c: int): var Matrix.T =
  result = mat[mat.index(r, c)]
Make [] a template instead:
template `[]`(mat: Matrix, r, c: int): expr =
  mat[mat.index(r, c)]
This causes a problem when mat is not a value, but something more complex:
proc x: Matrix[float, 2, 2] =
  echo "x()"

var y = x()[1, 0]
This prints x() twice.

Scala: is it possible to make a method "+" work like this: x + y = z?

I have a graph, with each vertex connected to 6 neighbors.
While constructing the graph and making declarations of the connections, I would like to use a syntax like this:
1. val vertex1, vertex2 = new Vertex
2. val index = 3 // a number between 0 and 5
3. vertex1 + index = vertex2
The result should be that vertex2 is assigned as the index-th neighbor of vertex1, equivalent to:
4. vertex1.neighbors(index) = vertex2
While frobbing with the implementation of Vertex.+, I came up with the following:
5. def +(idx: Int) = neighbors(idx)
which, very surprisingly indeed, did not cause line 3 to be underlined in red by my IDE (IntelliJ IDEA, BTW).
However, compiling line 3 produced the following message:
error: missing arguments for method + in class Vertex;
follow this method with `_' if you want to treat it as a partially applied function
Next, I tried with an extractor, but actually, that doesn't seem to fit the case very well.
Does anybody have any clue whether what I'm trying to achieve is at all feasible?
Thank you
You probably can achieve what you want by using := instead of =. Take a look at this illustrating repl session:
scala> class X { def +(x:X) = x; def :=(x:X) = x }
defined class X
scala> val a = new X;
a: X = X#7d283b68
scala> val b = new X;
b: X = X#44a06d88
scala> val c = new X;
c: X = X#fb88599
scala> a + b := c
res8: X = X#fb88599
As one of the comments stated, a custom = requires two parameters; for example, vertex1(i) = vertex2 is desugared to vertex1.update(i, vertex2), which forbids the exact syntax you proposed. On the other hand, := is a regular custom operator, and a := b desugars to a.:=(b).
Now we still have one consideration to make: is the precedence going to work as you intend? The answer is yes; according to the Language Specification, section 6.12.3, + has higher precedence than :=, so it ends up working as (a + b) := c.
Not exactly what you want, just playing with right-associativity:
scala> class Vertex {
| val neighbors = new Array[Vertex](6)
| def :=< (n: Int) = (this, n)
| def >=: (conn: (Vertex, Int)) {
| val (that, n) = conn
| that.neighbors(n) = this
| this.neighbors((n+3)%6) = that
| }
| }
defined class Vertex
scala> val a, b, c, d = new Vertex
a: Vertex = Vertex#c42aea
b: Vertex = Vertex#dd9f68
c: Vertex = Vertex#ca0c9
d: Vertex = Vertex#10fed2c
scala> a :=<0>=: b ; a :=<1>=: c ; d :=<5>=: a
scala> a.neighbors
res25: Array[Vertex] = Array(Vertex#dd9f68, Vertex#ca0c9, Vertex#10fed2c, null, null, null)

Write this Scala Matrix multiplication in Haskell [duplicate]

Possible Duplicate:
Can you overload + in haskell?
Can you implement a Matrix class and an * operator that will work on two matrices?:
scala> val x = Matrix(3, 1,2,3,4,5,6)
x: Matrix =
[1.0, 2.0, 3.0]
[4.0, 5.0, 6.0]
scala> x*x.transpose
res0: Matrix =
[14.0, 32.0]
[32.0, 77.0]
and just so people don't say that it's hard, here is the Scala implementation (courtesy of Jonathan Merritt):
class Matrix(els: List[List[Double]]) {

  /** elements of the matrix, stored as a list of
      its rows */
  val elements: List[List[Double]] = els

  def nRows: Int = elements.length
  def nCols: Int = if (elements.isEmpty) 0
                   else elements.head.length

  /** all rows of the matrix must have the same
      number of columns */
  require(elements.forall(_.length == nCols))

  /* Add to each elem of matrix */
  private def addRows(a: List[Double],
                      b: List[Double]): List[Double] =
    List.map2(a, b)(_+_)

  private def subRows(a: List[Double],
                      b: List[Double]): List[Double] =
    List.map2(a, b)(_-_)

  def +(other: Matrix): Matrix = {
    require((other.nRows == nRows) &&
            (other.nCols == nCols))
    new Matrix(
      List.map2(elements, other.elements)(addRows(_, _))
    )
  }

  def -(other: Matrix): Matrix = {
    require((other.nRows == nRows) &&
            (other.nCols == nCols))
    new Matrix(
      List.map2(elements, other.elements)(subRows(_, _))
    )
  }

  def transpose(): Matrix = new Matrix(List.transpose(elements))

  private def dotVectors(a: List[Double],
                         b: List[Double]): Double = {
    val multipliedElements = List.map2(a, b)(_*_)
    (0.0 /: multipliedElements)(_+_)
  }

  def *(other: Matrix): Matrix = {
    require(nCols == other.nRows)
    val t = other.transpose()
    new Matrix(
      for (row <- elements) yield {
        for (otherCol <- t.elements)
          yield dotVectors(row, otherCol)
      }
    )
  }

  override def toString(): String = {
    val rowStrings =
      for (row <- elements)
        yield row.mkString("[", ", ", "]")
    rowStrings.mkString("", "\n", "\n")
  }
}

/* Matrix constructor from a bunch of numbers */
object Matrix {
  def apply(nCols: Int, els: Double*): Matrix = {

    def splitRowsWorker(
        inList: List[Double],
        working: List[List[Double]]): List[List[Double]] =
      if (inList.isEmpty)
        working
      else {
        val (a, b) = inList.splitAt(nCols)
        splitRowsWorker(b, working + a)
      }

    def splitRows(inList: List[Double]) =
      splitRowsWorker(inList, List[List[Double]]())

    val rows: List[List[Double]] = splitRows(els.toList)
    new Matrix(rows)
  }
}
EDIT I understood that, strictly speaking, the answer is no: overloading * is not possible without the side effect of also defining + and other operations, or without special tricks. The numeric-prelude package describes it best:
In some cases, the hierarchy is not finely-grained enough: Operations
that are often defined independently are lumped together. For
instance, in a financial application one might want a type "Dollar",
or in a graphics application one might want a type "Vector". It is
reasonable to add two Vectors or Dollars, but not, in general,
reasonable to multiply them. But the programmer is currently forced to
define a method for '(*)' when she defines a method for '(+)'.
It'll be perfectly safe with a smart constructor and stored dimensions. Of course there are no natural implementations for the operations signum and fromIntegral (or maybe a diagonal matrix would be fine for the latter).
module Matrix (Matrix(), matrix, matrixTranspose) where

import Data.List (transpose)

data Matrix a = Matrix {matrixN :: Int,
                        matrixM :: Int,
                        matrixElems :: [[a]]}
                deriving (Show, Eq)

matrix :: Int -> Int -> [[a]] -> Matrix a
matrix n m vals
  | length vals /= m = error "Wrong number of rows"
  | any (/= n) $ map length vals = error "Column length mismatch"
  | otherwise = Matrix n m vals

matrixTranspose (Matrix m n vals) = matrix n m (transpose vals)

instance Num a => Num (Matrix a) where
  (+) (Matrix m n vals) (Matrix m' n' vals')
    | m /= m' = error "Row number mismatch"
    | n /= n' = error "Column number mismatch"
    | otherwise = Matrix m n (zipWith (zipWith (+)) vals vals')
  abs (Matrix m n vals) = Matrix m n (map (map abs) vals)
  negate (Matrix m n vals) = Matrix m n (map (map negate) vals)
  (*) (Matrix m n vals) (Matrix n' p vals')
    | n /= n' = error "Matrix dimension mismatch in multiplication"
    | otherwise = let tvals' = transpose vals'
                      dot x y = sum $ zipWith (*) x y
                      result = map (\col -> map (dot col) tvals') vals
                  in Matrix m p result
Test it in ghci:
*Matrix> let a = matrix 3 2 [[1,0,2],[-1,3,1]]
*Matrix> let b = matrix 2 3 [[3,1],[2,1],[1,0]]
*Matrix> a*b
Matrix {matrixN = 3, matrixM = 3, matrixElems = [[5,1],[4,2]]}
Since my Num instance is generic, it even works for complex matrices out of the box:
Prelude Data.Complex Matrix> let c = matrix 2 2 [[0:+1,1:+0],[5:+2,4:+3]]
Prelude Data.Complex Matrix> let a = matrix 2 2 [[0:+1,1:+0],[5:+2,4:+3]]
Prelude Data.Complex Matrix> let b = matrix 2 3 [[3:+0,1],[2,1],[1,0]]
Prelude Data.Complex Matrix> a
Matrix {matrixN = 2, matrixM = 2, matrixElems = [[0.0 :+ 1.0,1.0 :+ 0.0],[5.0 :+ 2.0,4.0 :+ 3.0]]}
Prelude Data.Complex Matrix> b
Matrix {matrixN = 2, matrixM = 3, matrixElems = [[3.0 :+ 0.0,1.0 :+ 0.0],[2.0 :+ 0.0,1.0 :+ 0.0],[1.0 :+ 0.0,0.0 :+ 0.0]]}
Prelude Data.Complex Matrix> a*b
Matrix {matrixN = 2, matrixM = 3, matrixElems = [[2.0 :+ 3.0,1.0 :+ 1.0],[23.0 :+ 12.0,9.0 :+ 5.0]]}
EDIT: new material
Oh, you want to just override the (*) function without any Num stuff. That's possible to do, but you'll have to remember that the Haskell standard library has reserved (*) for use in the Num class.
module Matrix where

import qualified Prelude as P
import Prelude hiding ((*))
import Data.List (transpose)

class Multiply a where
  (*) :: a -> a -> a

data Matrix a = Matrix {matrixN :: Int,
                        matrixM :: Int,
                        matrixElems :: [[a]]}
                deriving (Show, Eq)

matrix :: Int -> Int -> [[a]] -> Matrix a
matrix n m vals
  | length vals /= m = error "Wrong number of rows"
  | any (/= n) $ map length vals = error "Column length mismatch"
  | otherwise = Matrix n m vals

matrixTranspose (Matrix m n vals) = matrix n m (transpose vals)

instance P.Num a => Multiply (Matrix a) where
  (*) (Matrix m n vals) (Matrix n' p vals')
    | n /= n' = error "Matrix dimension mismatch in multiplication"
    | otherwise = let tvals' = transpose vals'
                      dot x y = sum $ zipWith (P.*) x y
                      result = map (\col -> map (dot col) tvals') vals
                  in Matrix m p result

a = matrix 3 2 [[1,2,3],[4,5,6]]
b = a * matrixTranspose a
Testing in ghci:
*Matrix> b
Matrix {matrixN = 3, matrixM = 3, matrixElems = [[14,32],[32,77]]}
There. Now if a third module wants to use both the Matrix version of (*) and the Prelude version of (*), it will of course have to import one or the other qualified. But that's just business as usual.
I could've done all of this without the Multiply type class but this implementation leaves our new shiny (*) open for extension in other modules.
Alright, there's a lot of confusion about what's happening here floating around, and it's not being helped by the fact that the Haskell term "class" does not line up with the OO term "class" in any meaningful way. So let's try to make a careful answer. This answer starts with Haskell's module system.
In Haskell, when you import a module Foo.Bar, it creates a new set of bindings. For each variable x exported by the module Foo.Bar, you get a new name Foo.Bar.x. In addition, you may:
import qualified or not. If you import qualified, nothing more happens. If you do not, an additional name without the module prefix is defined; in this case, just plain old x is defined.
change the qualification prefix or not. If you import as Alias, then the name Foo.Bar.x is not defined, but the name Alias.x is.
hide certain names. If you hide name foo, then neither the plain name foo nor any qualified name (like Foo.Bar.foo or Alias.foo) is defined.
Furthermore, names may be multiply defined. For example, if Foo.Bar and Baz.Quux both export the variable x, and I import both modules without qualification, then the name x refers to both Foo.Bar.x and Baz.Quux.x. If the name x is never used in the resulting module, this clash is ignored; otherwise, a compiler error asks you to provide more qualification.
Finally, if none of your imports mention the module Prelude, the following implicit import is added:
import Prelude
This imports the Prelude without qualification, with no additional prefix, and without hiding any names. So it defines "bare" names and names prefixed by Prelude., and nothing more.
Here ends the bare basics you need to understand about the module system. Now let's discuss the bare basics you need to understand about typeclasses.
A typeclass includes a class name, a list of type variables bound by that class, and a collection of variables with type signatures that refer to the bound variables. Here's an example:
class Foo a where
  foo :: a -> a -> Int
The class name is Foo, the bound type variable is a, and there is only one variable in the collection, namely foo, with type signature a -> a -> Int. This class declares that some types have a binary operation, named foo, which computes an Int. Any type may later (even in another module) be declared to be an instance of this class: this involves defining the binary operation above, where the bound type variable a is substituted with the type you are creating an instance for. As an example, we might implement this for integers by the instance:
instance Foo Int where
  foo a b = (a `mod` 76) * (b + 7)
Here ends the bare basics you need to understand about typeclasses. We may now answer your question. The only reason the question is tricky is because it falls smack dab on the intersection between two name management techniques: modules and typeclasses. Below I discuss what this means for your specific question.
The module Prelude defines a typeclass named Num, which includes in its collection of variables a variable named *. Therefore, we have several options for the name *:
If the type signature we desire happens to follow the pattern a -> a -> a, for some type a, then we may implement the Num typeclass. We therefore extend the Num class with a new instance; the name Prelude.* and any aliases for this name are extended to work for the new type. For matrices, this would look like, for example,
instance Num Matrix where
  m * n = {- implementation goes here -}
We may define a different name than *.
m |*| n = {- implementation goes here -}
We may define the name *. Whether this name is defined as part of a new type class or not is immaterial. If we do nothing else, there will then be at least two definitions of *, namely, the one in the current module and the one implicitly imported from the Prelude. We have a variety of ways of dealing with this. The simplest is to explicitly import the Prelude, and ask for the name * not to be defined:
import Prelude hiding ((*))
You might alternately choose to leave the implicit import of Prelude, and use a qualified * everywhere you use it. Other solutions are also possible.
The main point I want you to take away from this is: the name * is in no way special. It is just a name defined by the Prelude, and all of the tools we have available for namespace control are available.
You can implement * as matrix multiplication by defining an instance of the Num class for Matrix. But the code won't be type-safe: * (and the other arithmetic operations) on matrices as you define them is not total, because of possible size mismatches or, in the case of '/', the non-existence of inverse matrices.
As for 'the hierarchy is not finely-grained enough': there is also the Monoid type class, exactly for the cases where only one operation is defined.
There are too many things that could be 'added', sometimes in rather exotic ways (think of permutation groups). Haskell's designers decided to reserve the arithmetical operators for the different representations of numbers, and to use other names for the more exotic cases.