Vector vs Vectors in spark [duplicate] - scala

This question already has an answer here:
Difference between spark Vectors and scala immutable Vector?
I am a newbie in Spark.
I am trying to read a text file that has data like:
timestamp id counter value
00:01 1 c1 0.5
00:02 5 c3 0.3
00:03 1 c2 0.1
00:04 2 c2 0.13
and transform them to:
(id, array_of_counters):
(1, [ c1  c2 ]
    [ 0.5 0.1 ])
So, for every id, I want to create a 2D array, which will hold every counter and every value for that specific id in the text file.
I tried to do it with Vectors, but I think the values stored in them must be Double, and that I cannot add two Vectors together, except when they are Breeze Vectors.
Then, I found out there is a data structure called just Vector, but I can't find any details about it.
So, my question is: what are the main differences between Vector and Vectors in MLlib?
Code:
val inputRdd = sc.textFile(inputFile).map(x => x.split(","))
val data = inputRdd.map(y => (y(1), Vector(y(2), y(3)))).reduceByKey(_++_)

I don't think a Vector is necessary or appropriate for what it appears you are trying to do here (I could be wrong; we need more specifics on what you want to accomplish). The only way it makes sense is if there is a fixed number of counters (c1, c2, etc.) for each id. If you simply want a set of every id with its corresponding list of counters and values, try this (I'm assuming counters are unique to each id):
// assuming a SparkSession named spark is in scope (needed for toDF and the encoders)
import org.apache.spark.sql.functions.{collect_list, map}
import spark.implicits._

val data = inputRdd
  .map(y => (y(1).toLong, y(2), y(3).toDouble))
  .toDF("id", "counter", "value")
  .groupBy("id")
  .agg(collect_list(map($"counter", $"value")))
  .as[(Long, Seq[Map[String, Double]])]
  .map(r => (r._1, r._2.reduce(_ ++ _)))
// this results in a Dataset[(Long, Map[String, Double])]
A Spark ml.linalg.Vector is basically an Array[Double], and would require a fixed number of counters for every record. You could transform the data above into a vector by ordering each Map[String, Double] by its keys (._1) and creating a Vector from its .values.
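For example, a minimal sketch of that transformation (assuming every id ends up with the same set of counters, so all the resulting vectors have the same length):
import org.apache.spark.ml.linalg.Vectors

// data is the Dataset[(Long, Map[String, Double])] built above
val asVectors = data.map { case (id, counters) =>
  // sort by counter name so each position means the same counter for every id
  val ordered = counters.toSeq.sortBy(_._1).map(_._2).toArray
  (id, Vectors.dense(ordered))
}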
ml.linalg.Vectors is just a helper object with functions for creating Vector objects.
Factory methods for org.apache.spark.ml.linalg.Vector. We don't use the name Vector because Scala imports scala.collection.immutable.Vector by default.
It's also worth noting that mllib is intended for the older RDD API, while ml is intended for the newer DataFrame/Dataset API.
Edit: RDD[(Long, Seq[(String, Double)])]
val data = inputRdd
  .map(y => (y(1).toLong, Seq[(String, Double)]((y(2), y(3).toDouble))))
  .reduceByKey(_ ++ _)
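If you then want a Map per id here too (matching the Dataset version above), one more small step, again assuming counters are unique per id:
val asMaps = data.mapValues(_.toMap)  // RDD[(Long, Map[String, Double])]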

Related

Spark RowMatrix columnSimilarities preserve original index

I have the following Scala Spark DataFrame df of (String, Array[Double]). Note that id is of type String (a base64 hash):
id, values
"a", [0.5, 0.6]
"b", [0.1, 0.2]
...
The dataset is quite large (45k) and I would like to perform a pairwise cosine similarity using org.apache.spark.mllib.linalg.distributed.RowMatrix for performance. This works, but I am not able to identify the pairwise similarities as the indexes have turned into integers (output columns i and j). How do I use IndexedRowMatrix to preserve the original indexes?
val rows = df.select("values")
  .rdd
  .map(_.getAs[org.apache.spark.ml.linalg.Vector](0))
  .map(org.apache.spark.mllib.linalg.Vectors.fromML)

val mat = new RowMatrix(rows)
val simsEstimate = mat.columnSimilarities()
Ideally, the end result should look something like this:
id_x, id_y, similarity
"a", "b", 0.9
"b", "c", 0.8
...
columnSimilarities() computes similarities between the columns of the RowMatrix, not among its rows, so the "ids" you have are meaningless in this context, and the indices are just positions within each feature vector.
Additionally, these methods are designed for long and narrow data, so an obvious approach (encode id with StringIndexer, create an IndexedRowMatrix, transpose, compute similarities, and go back with IndexToString) simply won't do.
Your best bet here is to use crossJoin:
df.as("a").crossJoin(df.as("b")).where($"a.id" <= $"b.id").select(
$"a.id" as "id_x", $"b.id" as "id_y", cosine_similarity($"a.values", $b.values")
)
where
val cosine_similarity = udf((xs: Seq[Double], ys: Seq[Double]) => ???)
is something you have to implement yourself.
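For instance, a minimal sketch of such a UDF (using Seq[Double], since that is how Spark passes array<double> columns to Scala UDFs):
import org.apache.spark.sql.functions.udf

val cosine_similarity = udf { (xs: Seq[Double], ys: Seq[Double]) =>
  val dot   = xs.zip(ys).map { case (x, y) => x * y }.sum
  val normX = math.sqrt(xs.map(x => x * x).sum)
  val normY = math.sqrt(ys.map(y => y * y).sum)
  if (normX == 0.0 || normY == 0.0) 0.0 else dot / (normX * normY)
}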
Alternatively you can explode the data:
import org.apache.spark.sql.functions.posexplode
val long = df.select($"id", posexplode($"values")).toDF("item", "feature", "value")
and then use the method shown in Spark Scala - How to group dataframe rows and apply complex function to the groups? to compute similarity.

BucketedRandomProjectionLSHModel approxNearestNeighbors function on entire dataframe

I'm trying to evaluate an entire DataFrame through the approxNearestNeighbors function of BucketedRandomProjectionLSHModel
What I expect:
A DataFrame containing the following information:
cookieId NN
id1 [id3, id5, id7]
id2 [id8, id9]
...
Input DataFrame (daily_content_transformed):
cookieID features(a sparse vector)
id1 sparse vector with features
id2 sparse vector with features
...
This works:
val key = Vectors.sparse(37599,
Array(1,4,6,7,16,57,81,104,166,225,290,692,763),
Array(1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0))
model.approxNearestNeighbors(daily_content_transformed, key, 20).show(20, false)
It returns a DataFrame with 21 rows. I could extract the cookieId column from this DataFrame and store it in the expected DataFrame.
Where I'm stuck:
Instead of hard-coding the key to retrieve nearest neighbours from, I want to run the method for every row in the input DataFrame and build a DataFrame like the expected one above.
Any help?
Edit in reply to first response:
After playing around with the suggestion to use approxSimilarityJoin instead of approxNearestNeighbors I came to the following conclusions:
the suggested solution works well for daily_content_transformed.limit(3000)
starting from daily_content_transformed.limit(5000), my Spark job terminates with a java.lang.OutOfMemoryError.
my input table contains +- 800 000 unique cookieIDs (rows).
Although the suggested solution works for small inputs, scalability is an issue.
BucketedRandomProjectionLSHModel doesn't provide the required API. I think you can approximate it using approxSimilarityJoin:
import org.apache.spark.sql.functions.{struct, udf, collect_list, sort_array}

val threshold: Double = ???  // maximum distance accepted by approxSimilarityJoin
val n: Int = ???             // number of neighbours to keep per cookieId

def take(n: Int) = udf((xs: Seq[String]) => xs.take(n))

model
  .approxSimilarityJoin(
    daily_content_transformed.alias("left"),
    daily_content_transformed.alias("right"),
    threshold)
  .groupBy($"datasetA.id" as "cookieId")
  // Collect pairs (dist, id)
  .agg(collect_list(struct($"distCol", $"datasetB.id" as "id")) as "NN")
  // Sort by distance (ascending, so nearest first), drop dist and take n
  .withColumn("NN", take(n)(sort_array($"NN", true).getItem("id")))
This preserves at most n neighbors per cookieId (fewer if not enough pairs fall within the threshold).

How to transform Array[(Double, Double)] into Array[Double] in Scala?

I'm using MLlib of Spark (v1.1.0) and Scala to do k-means clustering applied to a file with points (longitude and latitude).
My file contains 4 fields separated by comma (the last two are the longitude and latitude).
Here, it's an example of k-means clustering using Spark:
https://spark.apache.org/docs/1.1.0/mllib-clustering.html
What I want to do is read the last two fields of my files that are in a specific directory in HDFS, transform them into an RDD<Vector>, and use this method of the KMeans class:
train(RDD<Vector> data, int k, int maxIterations)
This is my code:
val data = sc.textFile("/user/test/location/*")
val parsedData = data.map(s => Vectors.dense(s.split(',').map(fields => (fields(2).toDouble,fields(3).toDouble))))
But when I run it in spark-shell I get the following error:
error: overloaded method value dense with alternatives:
  (values: Array[Double])org.apache.spark.mllib.linalg.Vector <and>
  (firstValue: Double,otherValues: Double*)org.apache.spark.mllib.linalg.Vector
 cannot be applied to (Array[(Double, Double)])
So, I don't know how to transform my Array[(Double, Double)] into Array[Double]. Maybe there is another way to read the two fields and convert them into RDD<Vector>, any suggestion?
The previous suggestion using flatMap was based on the assumption that you wanted to map over the elements of the array given by the .split(","), and it satisfied the types by using Array instead of Tuple2.
The argument received by the .map/.flatMap functions is an element of the original collection, so it should be named 'field' (singular) for clarity. Calling fields(2) selects the 3rd character of each element of the split, hence the source of confusion.
If what you're after is the 3rd and 4th elements of the .split(",") array, converted to Double:
s.split(",").drop(2).take(2).map(_.toDouble)
or, if you want all BUT the first two fields converted to Double (in case there may be more than two):
s.split(",").drop(2).map(_.toDouble)
There are two 'factory' methods for dense Vectors:
def dense(values: Array[Double]): Vector
def dense(firstValue: Double, otherValues: Double*): Vector
whereas the type provided above is Array[Tuple2[Double, Double]], which does not type-match either alternative:
(Extracting the logic above:)
val parseLineToTuple: String => Array[(Double, Double)] = s => s.split(',').map(fields => (fields(2).toDouble, fields(3).toDouble))
What is needed here is to create a new Array out of the input String, like this: (again focusing only on the specific parsing logic)
val parseLineToArray: String => Array[Double] = s => s.split(",").flatMap(fields => Array(fields(2).toDouble, fields(3).toDouble))
Integrating that in the original code should solve the issue:
val data = sc.textFile("/user/test/location/*")
val vectors = data.map(s => Vectors.dense(parseLineToArray(s)))
(You can of course inline that code, I separated it here to focus on the issue at hand)
val parsedData = data.map(s => Vectors.dense(s.split(',').flatMap(fields => Array(fields(2).toDouble,fields(3).toDouble))))

How do I run the Spark decision tree with a categorical feature set using Scala?

I have a feature set with a corresponding categoricalFeaturesInfo: Map[Int,Int]. However, for the life of me I cannot figure out how I am supposed to get the DecisionTree class to work. It will not accept anything but a LabeledPoint as data. However, LabeledPoint requires (double, vector), where the vector requires doubles.
val LP = featureSet.map(x => LabeledPoint(classMap(x(0)),Vectors.dense(x.tail)))
// Run training algorithm to build the model
val maxDepth: Int = 3
val isMulticlassWithCategoricalFeatures: Boolean = true
val numClassesForClassification: Int = countPossibilities(labelCol)
val model = DecisionTree.train(LP, Classification, Gini, isMulticlassWithCategoricalFeatures, maxDepth, numClassesForClassification,categoricalFeaturesInfo)
The error I get:
scala> val LP = featureSet.map(x => LabeledPoint(classMap(x(0)),Vectors.dense(x.tail)))
<console>:32: error: overloaded method value dense with alternatives:
(values: Array[Double])org.apache.spark.mllib.linalg.Vector <and>
(firstValue: Double,otherValues: Double*)org.apache.spark.mllib.linalg.Vector
cannot be applied to (Array[String])
val LP = featureSet.map(x => LabeledPoint(classMap(x(0)),Vectors.dense(x.tail)))
My resources thus far:
tree config, decision tree, labeledpoint
You can first transform categories to numbers, then load data as if all features are numerical.
When you build a decision tree model in Spark, you just need to tell Spark which features are categorical, as well as each feature's arity (the number of distinct categories of that feature), by specifying a map (Map[Int, Int]) from feature index to arity.
For example if you have data as:
1,a,add
2,b,more
1,c,thinking
3,a,to
1,c,me
You can first transform data into numerical format as:
1,0,0
2,1,1
1,2,2
3,0,3
1,2,4
In that format you can load data to Spark. Then if you want to tell Spark the second and the third columns are categorical, you should create a map:
val categoricalFeaturesInfo = Map[Int, Int]((1, 3), (2, 5))
The map tells us that the feature with index 1 has arity 3, and the feature with index 2 has arity 5. They will be considered categorical when we build a decision tree model, passing that map as a parameter of the training function:
val model = DecisionTree.trainClassifier(trainingData, numClasses, categoricalFeaturesInfo, impurity, maxDepth, maxBins)
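For reference, here is a rough, self-contained sketch of that call, using the numeric rows above as feature vectors with made-up 0/1 labels and hypothetical parameter values (the only hard requirement from the answer is the categoricalFeaturesInfo map):
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.tree.DecisionTree

// the numeric rows from the example above, with hypothetical labels
val trainingData = sc.parallelize(Seq(
  LabeledPoint(0.0, Vectors.dense(1.0, 0.0, 0.0)),
  LabeledPoint(1.0, Vectors.dense(2.0, 1.0, 1.0)),
  LabeledPoint(0.0, Vectors.dense(1.0, 2.0, 2.0)),
  LabeledPoint(1.0, Vectors.dense(3.0, 0.0, 3.0)),
  LabeledPoint(0.0, Vectors.dense(1.0, 2.0, 4.0))
))

val numClasses = 2                                  // assumed: binary label
val categoricalFeaturesInfo = Map[Int, Int](1 -> 3, 2 -> 5)
val impurity = "gini"
val maxDepth = 5
val maxBins = 32                                    // must be at least the largest arity

val model = DecisionTree.trainClassifier(
  trainingData, numClasses, categoricalFeaturesInfo, impurity, maxDepth, maxBins)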
Strings are not supported by LabeledPoint. One way to put your data into a LabeledPoint is to split it into multiple columns, considering that your strings are categorical.
So for example, if you have the following dataset:
id,String,Intvalue
1,"a",123
2,"b",456
3,"c",789
4,"a",887
Then you could split your string data, turning each distinct string value into a new column:
a -> 1,0,0
b -> 0,1,0
c -> 0,0,1
As you have 3 distinct values of Strings, you will convert your string column into 3 new columns, and each value will be represented by a value in these new columns.
Now your dataset will be
id,a,b,c,Intvalue
1,1,0,0,123
2,0,1,0,456
3,0,0,1,789
4,1,0,0,887
You can now convert these into Double values and use them in your LabeledPoint.
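As an illustration, a small sketch of that manual one-hot encoding (the names here are hypothetical, and it assumes the distinct string values are known up front):
val categories = Seq("a", "b", "c")

// e.g. oneHot("b") == Array(0.0, 1.0, 0.0)
def oneHot(value: String): Array[Double] =
  categories.map(c => if (c == value) 1.0 else 0.0).toArray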
Another way to convert your strings into a LabeledPoint is to create a distinct list of values for each column, and convert each string into the index of that string in the list. This is not recommended, because in this example dataset it would be
a = 0
b = 1
c = 2
But in this case the algorithm would consider a closer to b than to c, which is an ordering that cannot actually be inferred from the data.
You need to check the type of the elements in array x.
The error log says that the items in array x are Strings, which are not supported in Spark.
Current Spark Vectors can only be filled with Doubles.
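In other words, a minimal sketch of the fix this implies, reusing the question's own identifiers and assuming the remaining fields have already been converted to numeric strings:
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

val LP = featureSet.map(x =>
  LabeledPoint(classMap(x(0)), Vectors.dense(x.tail.map(_.toDouble))))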

How can I create a TF-IDF for Text Classification using Spark?

I have a CSV file with the following format :
product_id1,product_title1
product_id2,product_title2
product_id3,product_title3
product_id4,product_title4
product_id5,product_title5
[...]
The product_idX is an integer and the product_titleX is a String, for example:
453478692, Apple iPhone 4 8Go
I'm trying to create the TF-IDF from my file so I can use it for a Naive Bayes Classifier in MLlib.
I am using Spark with Scala so far, following the tutorials I have found on the official page and the Berkeley AmpCamp 3 and 4.
So I'm reading the file :
val file = sc.textFile("offers.csv")
Then I'm mapping it into tuples, an RDD[Array[String]]:
val tuples = file.map(line => line.split(",")).cache
and afterwards I'm transforming the tuples into a pair RDD[(Int, String)]:
val pairs = tuples.map(line => (line(0).toInt, line(1)))
But I'm stuck here and I don't know how to create the Vector from it to turn it into TF-IDF.
Thanks
To do this myself (using pyspark), I first started by creating two data structures out of the corpus. The first is a key, value structure of
document_id, [token_ids]
The second is an inverted index like
token_id, [document_ids]
I'll call those corpus and inv_index respectively.
To get tf we need to count the number of occurrences of each token in each document. So
from collections import Counter

def wc_per_row(row):
    cnt = Counter()
    for word in row:
        cnt[word] += 1
    return cnt.items()

tf = corpus.map(lambda (x, y): (x, wc_per_row(y)))
The df is simply the length of each term's inverted index. From that we can calculate the idf.
df = inv_index.map(lambda (x, y): (x, len(y)))
num_documents = tf.count()
# At this step you can also apply some filters to make sure to keep
# only terms within a 'good' range of df.
from math import log10
# float() guards against integer division under Python 2
idf = df.map(lambda (k, v): (k, 1. + log10(num_documents / float(v)))).collect()
Now we just have to do a join on the term_id:
def calc_tfidf(tf_tuples, idf_tuples):
    return [(k1, v1 * v2) for (k1, v1) in tf_tuples
                          for (k2, v2) in idf_tuples if k1 == k2]

tfidf = tf.map(lambda (k, v): (k, calc_tfidf(v, idf)))
This isn't a particularly performant solution, though. Calling collect to bring idf into the driver program so that it's available for the join seems like the wrong thing to do.
And of course, it requires first tokenizing and creating a mapping from each unique token in the vocabulary to some token_id.
If anyone can improve on this, I'm very interested.
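One possible improvement, sketched here in Scala to match the asker's setup (the types are assumptions: tf as an RDD[(Long, Seq[(String, Int)])] of per-document token counts and idf as an RDD[(String, Double)]), is to keep idf distributed and join on the token instead of collecting it to the driver:
val tfidf = tf
  .flatMap { case (docId, counts) =>
    counts.map { case (token, count) => (token, (docId, count)) }
  }
  .join(idf)                                        // (token, ((docId, count), idfWeight))
  .map { case (token, ((docId, count), idfWeight)) =>
    (docId, (token, count * idfWeight))
  }
  .groupByKey()                                     // docId -> (token, tf*idf) pairs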