Correctness of networkx line digraph construction - networkx

The following Python code:
import networkx as nx
g = nx.DiGraph()
g.add_nodes_from([0, 1])
g.add_edges_from([(0,0), (0,1), (1,0), (1,1)])
nx.write_dot(g, 'g.dot')
gl = nx.line_graph(g)
nx.write_dot(gl, 'gl.dot')
creates the following dot format graphs:
--- g.dot ---
digraph {
0 -> 0;
0 -> 1;
1 -> 0;
1 -> 1;
}
--- gl.dot ---
strict digraph {
"(0, 1)" -> "(1, 1)";
"(1, 0)" -> "(0, 0)";
"(0, 0)" -> "(0, 1)";
"(1, 1)" -> "(1, 0)";
}
Should the edges:
"(1, 0)" -> "(0, 1)";
"(0, 1)" -> "(1, 0)";
"(0, 0)" -> "(1, 1)";
"(1, 1)" -> "(0, 0)";
be in the line graph construction?
http://en.wikipedia.org/wiki/Line_graph#Line_digraph

nx.line_graph creates a strict DiGraph. Strict means there are no self-loops and no repeated edges. "(0, 1)" -> "(1, 0)" closes a loop (following edge (0, 1) and then edge (1, 0) returns you to node 0), so it is not included. In other words: "Two vertices representing directed edges from u to v and from w to x in G are connected by an edge from uv to wx in the *strict* line digraph when v = w *and u != x*". (Quote from Wikipedia; my insertions are marked with asterisks.)

The NetworkX line digraph code doesn't add isolates for self-loops nor does it allow creation of a self-loop in the line graph (loop of length 2 in original graph).
This may be a bug or a feature. If someone could point me to the definition of the line graph for these cases maybe we can at least clarify the documentation or update the code.
Note that the dot-file keyword "strict" is technically accurate here, but only by happy coincidence: the code that writes dot files checks for self-loops and multi-edges and emits "strict" if it finds none.
Meantime here is a way to generate a directed line graph allowing both kinds of loops:
import networkx as nx

def directed_line_graph(G):
    L = nx.DiGraph()
    for u, adj in G.adjacency():  # G.adjacency_iter() in networkx 1.x
        for v in adj:
            L.add_node((u, v))  # add self loops and single edges as nodes
            for w in G[v]:  # successors of v
                if not (u == v and v == w):
                    L.add_edge((u, v), (v, w))
    return L
g = nx.DiGraph()
g.add_edges_from([(0, 0), (0, 1), (1, 0), (1, 1)])
gl = directed_line_graph(g)
for e in gl.edges():
    print(e)
# ((0, 1), (1, 0))
# ((0, 1), (1, 1))
# ((1, 0), (0, 1))
# ((1, 0), (0, 0))
# ((0, 0), (0, 1))
# ((1, 1), (1, 0))
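As a cross-check that does not depend on networkx, the strict rule quoted above (v = w together with u != x) can be verified with plain sets; a small sketch:

```python
edges = {(0, 0), (0, 1), (1, 0), (1, 1)}

# full line digraph: uv -> wx whenever v == w
full = {((u, v), (w, x)) for (u, v) in edges for (w, x) in edges if v == w}

# "strict" variant: additionally require u != x
strict = {(e1, e2) for (e1, e2) in full if e1[0] != e2[1]}

print(sorted(strict))         # the four edges that appear in gl.dot
print(sorted(full - strict))  # the edges the strict construction drops
```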

Related

How to pair each element of a Seq with the rest?

I'm looking for an elegant way to combine every element of a Seq with the rest for a large collection.
Example: Seq(1,2,3).someMethod should produce something like
Iterator(
(1,Seq(2,3)),
(2,Seq(1,3)),
(3,Seq(1,2))
)
Order of elements doesn't matter. It doesn't have to be a tuple, a Seq(Seq(1),Seq(2,3)) is also acceptable (although kinda ugly).
Note the emphasis on large collection (which is why my example shows an Iterator).
Also note that this is not combinations.
Ideas?
Edit:
In my use case, the numbers are expected to be unique. If a solution can eliminate the dupes, that's fine, but not at additional cost. Otherwise, dupes are acceptable.
Edit 2: In the end I went with a nested for-loop and skipped the case when i == j, so no new collections were created. I upvoted the solutions that were correct and simple ("simplicity is the ultimate sophistication" - Leonardo da Vinci). But even the best ones are quadratic just by the nature of the problem, and some create intermediate collections by using ++, which I wanted to avoid because the collection I'm dealing with has close to 50000 elements - about 2.5 billion pairs when squared.
The following code has constant runtime (it does everything lazily), but accessing every element of the resulting collections has constant overhead (when accessing each element, an index shift must be computed every time):
def faceMap(i: Int)(j: Int) = if (j < i) j else j + 1

def facets[A](simplex: Vector[A]): Seq[(A, Seq[A])] = {
  val n = simplex.size
  (0 until n).view.map { i =>
    (simplex(i), (0 until n - 1).view.map(j => simplex(faceMap(i)(j))))
  }
}
Example:
println("Example: facets of a 3-dimensional simplex")
for ((i, v) <- facets((0 to 3).toVector)) {
println(i + " -> " + v.mkString("[", ",", "]"))
}
Output:
Example: facets of a 3-dimensional simplex
0 -> [1,2,3]
1 -> [0,2,3]
2 -> [0,1,3]
3 -> [0,1,2]
This code expresses everything in terms of simplices, because "omitting one index" corresponds exactly to the face maps for a combinatorially described simplex. To further illustrate the idea, here is what the faceMap does:
println("Example: how `faceMap(3)` shifts indices")
for (i <- 0 to 5) {
println(i + " -> " + faceMap(3)(i))
}
gives:
Example: how `faceMap(3)` shifts indices
0 -> 0
1 -> 1
2 -> 2
3 -> 4
4 -> 5
5 -> 6
The facets method uses the faceMaps to create a lazy view of the original collection that omits one element by shifting the indices by one starting from the index of the omitted element.
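For readers who find the view machinery hard to follow, here is the same index-shifting idea sketched in Python with generators (face_map/facets are hypothetical snake_case counterparts of the Scala names above):

```python
def face_map(i, j):
    # skip index i by shifting indices >= i up by one
    return j if j < i else j + 1

def facets(simplex):
    n = len(simplex)

    def one_facet(i):
        # lazy: elements are produced on demand, nothing is copied
        return (simplex[face_map(i, j)] for j in range(n - 1))

    for i in range(n):
        yield simplex[i], one_facet(i)

for x, rest in facets([0, 1, 2, 3]):
    print(x, "->", list(rest))
```

Each facet is itself a generator, so as in the Scala version, accessing an element costs one index shift but no intermediate collection is ever built.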
If I understand what you want correctly, in terms of handling duplicate values (i.e., duplicate values are to be preserved), here's something that should work. Given the following input:
import scala.util.Random
val nums = Vector.fill(20)(Random.nextInt)
This should get you what you need:
for (i <- Iterator.from(0).take(nums.size)) yield {
nums(i) -> (nums.take(i) ++ nums.drop(i + 1))
}
On the other hand, if you want to remove dups, I'd convert to Sets:
val numsSet = nums.toSet
for (num <- nums) yield {
num -> (numsSet - num)
}
seq.iterator.map { case x => x -> seq.filter(_ != x) }
This is quadratic, but I don't think there is much you can do about that, because at the end of the day creating a collection is linear, and you are going to need N of them.
import scala.annotation.tailrec
def prems(s: Seq[Int]): Map[Int, Seq[Int]] = {
  @tailrec
  def p(prev: Seq[Int], s: Seq[Int], res: Map[Int, Seq[Int]]): Map[Int, Seq[Int]] = s match {
    case x +: Seq() => res + (x -> prev)
    case x +: xs    => p(x +: prev, xs, res + (x -> (prev ++ xs)))
  }
  p(Seq.empty[Int], s, Map.empty[Int, Seq[Int]])
}
prems(Seq(1,2,3,4))
res0: Map[Int,Seq[Int]] = Map(1 -> List(2, 3, 4), 2 -> List(1, 3, 4), 3 -> List(2, 1, 4), 4 -> List(3, 2, 1))
I think you are looking for permutations. You can map the resulting lists into the structure you are looking for:
Seq(1,2,3).permutations.map(p => (p.head, p.tail)).toList
res49: List[(Int, Seq[Int])] = List((1,List(2, 3)), (1,List(3, 2)), (2,List(1, 3)), (2,List(3, 1)), (3,List(1, 2)), (3,List(2, 1)))
Note that the final toList call is only there to trigger the evaluation of the expressions; otherwise, the result is an iterator as you asked for.
To get rid of the duplicate heads, toMap seems like the most straightforward approach:
Seq(1,2,3).permutations.map(p => (p.head, p.tail)).toMap
res50: scala.collection.immutable.Map[Int,Seq[Int]] = Map(1 -> List(3, 2), 2 -> List(3, 1), 3 -> List(2, 1))
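The same approach in Python with itertools.permutations shows both the duplicate heads and the toMap-style deduplication (and the same caveat applies: generating n! permutations just for heads and tails is wasteful):

```python
from itertools import permutations

# one (head, tail) pair per permutation; heads repeat
pairs = [(p[0], list(p[1:])) for p in permutations([1, 2, 3])]
print(pairs)

# dict() keeps the last tail seen per head, like Scala's toMap
print(dict(pairs))
```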

Does PureScript have a pipe operator?

Coming from the F# world, I am used to using |> to pipe data into functions:
[1..10] |> List.filter (fun n -> n % 2 = 0) |> List.map (fun n -> n * n);
I assume that PureScript, being inspired by Haskell, has something similar.
How do I use the pipe operator in PureScript?
Yes, you can use # which is defined in Prelude.
Here is your example, rewritten using #:
http://try.purescript.org/?gist=0448c53ae7dc92278ca7c2bb3743832d&backend=core
module Main where
import Prelude
import Data.List ((..))
import Data.List as List
example = 1..10 # List.filter (\n -> n `mod` 2 == 0)
# map (\n -> n * n)
Here's one way to define the |> operator yourself in PureScript; it's defined in exactly the same way as # - i.e. with the same precedence and associativity:
pipeForwards :: forall a b. a -> (a -> b) -> b
pipeForwards x f = f x
infixl 1 pipeForwards as |>

Cannot understand 'functions as arguments' recursion

I'm taking a functional programming languages course and I'm having difficulty understanding recursion within the context of 'functions as arguments'
fun n_times(f , n , x) =
if n=0
then x
else f (n_times(f , n - 1 , x))
fun double x = x+x;
val x1 = n_times(double , 4 , 7);
the value of x1 = 112
This doubles 'x' 'n' times so 7 doubled 4 times = 112
I can understand simpler recursion patterns such as adding numbers in a list, or 'power of' functions but I fail to understand how this function 'n_times' evaluates by calling itself ? Can provide an explanation of how this function works ?
I've tagged with scala as I'm taking this course to improve my understanding of scala (along with functional programming) and I think this is a common pattern so may be able to provide advice ?
If n is 0, x is returned.
Otherwise, f (n_times(f , n - 1 , x)) is returned.
What does n_times do? It applies f to x, n times. Equivalently: it calls f once on the result of n_times(f, n - 1, x), which is f applied to x, n-1 times.
Note that by "calling f n times" I mean, for example:
calling f 3 times: f(f(f(x)))
calling f 2 times: f(f(x))
Just expand by hand. I'm going to call n_times nx to save space.
The core operation is
nx(f, n, x) -> f( nx(f, n-1, x))
terminating with
nx(f, 0, x) -> x
So, of course,
nx(f, 1, x) -> f( nx(f, 0, x) ) -> f( x )
nx(f, 2, x) -> f( nx(f, 1, x) ) -> f( f( x ) )
...
nx(f, n, x) -> f( nx(f,n-1,x) ) -> f( f( ... f( x ) ... ) )
Function n_times has a base case when n = 0 and an inductive case otherwise. You recurse on the inductive case until terminating on the base case.
Here is an illustrative trace:
n_times(double, 4, 7)
~> double (n_times(double, 3, 7)) (* n = 4 > 0, inductive case *)
~> double (double (n_times(double, 2, 7))) (* n = 3 > 0, inductive case *)
~> double (double (double (n_times(double, 1, 7)))) (* n = 2 > 0, inductive case *)
~> double (double (double (double (n_times(double, 0, 7))))) (* n = 1 > 0, inductive case *)
~> double (double (double (double 7))) (* n = 0, base case *)
~> double (double (double 14))
~> double (double 28)
~> double 56
~> 112
It is the same recursive thinking you already know, just mixed with another concept: higher-order functions.
n_times takes a function (f) as a parameter, so n_times is a higher-order function, which in turn can apply this function f in its body. In effect, that is its job: apply f to x, n times.
So how do you apply f n times to x? Well, if you have already applied it n-1 times,
n_times(f , n - 1 , x)
then you apply it once more:
f (n_times(f , n - 1 , x))
You have to stop the recursion as usual; that is the n=0 case, which simply returns x.
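A direct Python transcription of the SML function may make the shape of the recursion easier to experiment with:

```python
def n_times(f, n, x):
    # base case: applying f zero times leaves x unchanged
    if n == 0:
        return x
    # inductive case: one more application of f around n-1 applications
    return f(n_times(f, n - 1, x))

def double(x):
    return x + x

print(n_times(double, 4, 7))  # → 112
```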

Scala best practices: mapping 2D data

What would be the best way in Scala to do the following code in Java in proper functional way?
LinkedList<Item> itemsInRange = new LinkedList<Item>();
for (int y = 0; y < height(); y++) {
for (int x = 0; x < width(); x++) {
Item it = myMap.getItemAt(cy + y, cx + x);
if (it != null)
itemsInRange.add(it);
}
}
// iterate over itemsInRange later
Of course, it can be translated directly into Scala in an imperative way:
import scala.collection.mutable.ListBuffer

val itemsInRange = new ListBuffer[Item]
for (y <- 0 until height) {
  for (x <- 0 until width) {
    val it = tileMap.getItemAt(cy + y, cx + x)
    if (!it.isEmpty)
      itemsInRange.append(it.get)
  }
}
But I'd like to do it in proper, functional way.
I presume that there should be map operation over some sort of 2D range. Ideally, map would execute a function that would get x and y as input parameters and output Option[Item]. After that I'll get something like Iterable[Option[Item]] and flatten over it will yield Iterable[Item]. If I'm right, the only missing piece of a puzzle is doing that map operation on 2D ranges in some way.
You can do this all in one nice step:
def items[A](w: Int, h: Int)(at: ((Int, Int)) => Option[A]): IndexedSeq[A] =
  for {
    x <- 0 until w
    y <- 0 until h
    i <- at((x, y))
  } yield i
Now say for example we have this representation of symbols on a four-by-four board:
val tileMap = Map(
(0, 0) -> 'a,
(1, 0) -> 'b,
(3, 2) -> 'c
)
We just write:
scala> items(4, 4)(tileMap.get)
res0: IndexedSeq[Symbol] = Vector('a, 'b, 'c)
Which I think is what you want.
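The same Option-flattening idea can be sketched in Python, with dict.get playing the role of the partial lookup and None standing in for a missing value:

```python
def items(w, h, at):
    # 'at' returns a value or None; None results are simply dropped
    return [v for x in range(w) for y in range(h)
            for v in [at((x, y))] if v is not None]

tile_map = {(0, 0): "a", (1, 0): "b", (3, 2): "c"}
print(items(4, 4, tile_map.get))  # → ['a', 'b', 'c']
```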

Generate possible combinations

I have several hash maps I need to generate combinations of:
A: [x->1, y->2,...]
B: [x->1, a->4,...]
C: [x->1, b->5,...]
...
some possible combinations:
A+B; A; A+C; A+B+C...
For each combination I need to produce the joint hashmap and perform an operation on the key-value pairs that share a key in both hash maps.
All I could come up with was using a binary counter and mapping the digits to the respective hash map:
001 -> A
101 -> A,C
...
Although this solution works, the modulo operations are time-consuming when I have more than 100 hashmaps. I'm new to Scala, but I believe there must be a better way to achieve this?
Scala sequences have a combinations function. This gives you combinations for choosing a certain number from the total. From you question it looks like you want to choose all different numbers, so your code in theory could be something like:
val elements = List('a, 'b, 'c, 'd)
(1 to elements.size).flatMap(elements.combinations).toList
/* List[List[Symbol]] = List(List('a), List('b), List('c), List('d), List('a, 'b),
List('a, 'c), List('a, 'd), List('b, 'c), List('b, 'd), List('c, 'd),
List('a, 'b, 'c), List('a, 'b, 'd), List('a, 'c, 'd), List('b, 'c, 'd),
List('a, 'b, 'c, 'd)) */
But as pointed out, all combinations will be too many. With 100 elements, choosing 2 from 100 gives you 4950 combinations, 3 gives 161700, 4 gives 3921225, and 5 already gives 75287520, which will likely exhaust memory if you force the result. So if you just keep the argument to combinations at 2 or 3 you should be OK.
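The Scala snippet above translates to Python almost one-to-one with itertools (same warning about the number of results):

```python
from itertools import chain, combinations

def all_combinations(elements):
    # every non-empty subset, smallest sizes first, like the flatMap above
    return chain.from_iterable(
        combinations(elements, n) for n in range(1, len(elements) + 1))

print(list(all_combinations(["a", "b", "c", "d"])))
```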
Well, think of how many combinations there are of your maps: suppose you have N maps.
(the maps individually) + (pairs of maps) + (triples of maps) + ... + (all the maps)
Which is of course
(N choose 1) + (N choose 2) + ... + (N choose N) = 2^N - 1
Where N choose M is defined as:
N! / (M! * (N-M)!)
For N=100 and M=50, N choose M is over 100,000,000,000,000,000,000,000,000,000 so "time consuming" really doesn't do justice to the problem!
Oh, and that assumes ordering is irrelevant - that is, that A + B equals B + A. If that assumption is wrong, you are faced with significantly more permutations than there are particles in the visible universe.
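These counts are easy to verify in Python with math.comb:

```python
from math import comb

n = 100
# the number of non-empty subsets is the sum of (N choose k), i.e. 2^N - 1
total = sum(comb(n, k) for k in range(1, n + 1))
print(total == 2**n - 1)       # → True
print(comb(100, 50) > 10**29)  # → True
```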
Why scala might help with this problem: its parallel collections framework!
Following up on your idea to use integers to represent bitsets: are you using the actual modulo operator? You can also use bitmasks to check whether some number is in a bitset. (Note that on the JVM both are single-instruction operations, so it is hard to say what is costing you time there.)
Another potentially major improvement: since your operation on the values of the maps is associative, you can save computation by reusing previous results. For example, if you need to combine A, B, C but have already combined, say, A and C into AC, you can just combine B with AC.
The following code implements both ideas:
type MapT = Map[String, Int] // for conciseness later

@scala.annotation.tailrec
def pow2(i: Int, acc: Int = 1): Int =
  // for reasonably sized ints...
  if (i <= 0) acc else pow2(i - 1, 2 * acc)

// initial set of maps
val maps = List(
  Map("x" -> 1, "y" -> 2),
  Map("x" -> 1, "a" -> 4),
  Map("x" -> 1, "b" -> 5)
)
val num = maps.size

// any 'op' that's commutative will do
def combine(m1: MapT, m2: MapT)(op: (Int, Int) => Int): MapT =
  (m1.keySet intersect m2.keySet).map(k => k -> op(m1(k), m2(k))).toMap

val numCombs = pow2(num)

// precompute all required powers of two
val masks: Array[Int] = (0 until num).map(pow2(_)).toArray

// this array will be filled in dynamic-programming style
val results: Array[MapT] = Array.fill(numCombs)(Map.empty)

// fill in the results for "combinations" of one map
for ((m, i) <- maps.zipWithIndex) { results(masks(i)) = m }

val zeroUntilNum = (0 until num).toList

for (n <- 2 to num; (x :: xs) <- zeroUntilNum.combinations(n)) {
  // We already know the result of combining the maps indexed by xs;
  // compute the corresponding bitmask and look the result up in the array.
  val known = xs.foldLeft(0)((a, i) => a | masks(i))
  val xm = masks(x)
  results(known | xm) = combine(results(known), results(xm))(_ + _)
}
If you print the resulting array, you get:
0 -> Map()
1 -> Map(x -> 1, y -> 2)
2 -> Map(x -> 1, a -> 4)
3 -> Map(x -> 2)
4 -> Map(x -> 1, b -> 5)
5 -> Map(x -> 2)
6 -> Map(x -> 2)
7 -> Map(x -> 3)
Of course, like everyone else pointed out, it will blow up eventually as the number of input maps increases.
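The same reuse-previous-results idea translates compactly to Python; this sketch splits each bitmask into its lowest bit plus the rest, both of which have already been computed when subsets are visited in increasing numeric order:

```python
maps = [{"x": 1, "y": 2}, {"x": 1, "a": 4}, {"x": 1, "b": 5}]
num = len(maps)

def combine(m1, m2, op):
    # apply op only on the keys present in both maps
    return {k: op(m1[k], m2[k]) for k in m1.keys() & m2.keys()}

results = [dict() for _ in range(1 << num)]
for i, m in enumerate(maps):
    results[1 << i] = m  # singleton "combinations"

for s in range(1, 1 << num):
    low = s & -s    # lowest set bit: a single map
    rest = s ^ low  # the rest of the subset, already combined
    if rest:
        results[s] = combine(results[rest], results[low], lambda a, b: a + b)

print(results[7])  # → {'x': 3}
```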