My PySpark dataframe is "Values":
+------+
|w_vote|
+------+
|   0.1|
|   0.2|
|  0.25|
|   0.3|
|  0.31|
|  0.36|
|  0.41|
|   0.5|
+------+
I want to loop over each value of the DataFrame using PySpark.
My code:
out = []
for i in values.collect():
    print(i)
What I basically want to do is the equivalent of for (i in 1:nrow(values)) in R.
I am trying the code below in PySpark, but it gives the result as:
Row(w_vote=0.1)
Row(w_vote=0.2)
Row(w_vote=0.25)
Row(w_vote=0.3)
Row(w_vote=0.31)
Row(w_vote=0.36)
Row(w_vote=0.41)
But I want the result as 0.1, 0.2, 0.25, etc.
collect returns a list of Row objects. A Row is kind of like a dict, except you access elements as attributes, not keys.
Accordingly, you can just do this:
result = [row.w_vote for row in values.collect()]
Or this:
result = [row.asDict()['w_vote'] for row in values.collect()]
As a for loop:
result = []
for row in values.collect():
    result.append(row.w_vote)
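If you only need that one column, a minimal alternative sketch (assuming the DataFrame is named values with the single column w_vote, as above) is to go through the RDD and flatten the rows:
# Each Row is iterable, so flatMap(lambda row: row) unpacks it into plain values
result = values.select("w_vote").rdd.flatMap(lambda row: row).collect()
# result -> [0.1, 0.2, 0.25, 0.3, ...]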
Related
I'm new to PySpark and trying to transform data.
Given this dataframe:
Col1
A=id1a A=id2a B=id1b C=id1c B=id2b
D=id1d A=id3a B=id3b C=id2c
A=id4a C=id3c
Required:
A B C
id1a id1b id1c
id2a id2b id2c
id3a id3b id3c
id4a null null
I have tried pivot, but that only gives the first value.
There might be a better way, but one approach is to split the column on spaces to create an array of entries, then use higher-order functions (Spark 2.4+) to split each entry of that array on '='. Then explode and create two columns, one with the key (A, B, C, D) and one with the id value. Then we can assign a row number within each key, group by it, and pivot:
import pyspark.sql.functions as F
from pyspark.sql import Window

# Split on whitespace, split each entry on '=', then explode into (cols, vals)
df1 = (df.withColumn("Col1", F.split(F.col("Col1"), r"\s+"))
         .withColumn("Col1", F.explode(F.expr("transform(Col1, x -> split(x, '='))")))
         .select(F.col("Col1")[0].alias("cols"), F.col("Col1")[1].alias("vals")))

# Number the rows within each key, then pivot the keys into columns
w = Window.partitionBy("cols").orderBy("cols")
final = (df1.withColumn("Rnum", F.row_number().over(w))
            .groupBy("Rnum")
            .pivot("cols")
            .agg(F.first("vals"))
            .orderBy("Rnum"))
final.show()
+----+----+----+----+----+
|Rnum| A| B| C| D|
+----+----+----+----+----+
| 1|id1a|id1b|id1c|id1d|
| 2|id2a|id2b|id2c|null|
| 3|id3a|id3b|id3c|null|
| 4|id4a|null|null|null|
+----+----+----+----+----+
This is how df1 looks after the transformation:
df1.show()
+----+----+
|cols|vals|
+----+----+
| A|id1a|
| A|id2a|
| B|id1b|
| C|id1c|
| B|id2b|
| D|id1d|
| A|id3a|
| B|id3b|
| C|id2c|
| A|id4a|
| C|id3c|
+----+----+
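For reference, a minimal sketch of how the input df used above could be built (the column name Col1 and the sample rows are taken from the question):
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# Sample data from the question, one string per row in column Col1
df = spark.createDataFrame(
    [("A=id1a A=id2a B=id1b C=id1c B=id2b",),
     ("D=id1d A=id3a B=id3b C=id2c",),
     ("A=id4a C=id3c",)],
    ["Col1"])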
Maybe I don't know the full picture, but the data format seems strange. If nothing can be done at the data source, then some collects, pivots and joins will be needed. Try this:
from functools import reduce
import pyspark.sql.functions as F

test = sqlContext.createDataFrame(
    [('A=id1a A=id2a B=id1b C=id1c B=id2b', 1),
     ('D=id1d A=id3a B=id3b C=id2c', 2),
     ('A=id4a C=id3c', 3)], schema=['col1', 'id'])
# Split on spaces and explode, then split each entry into key and value
tst_spl = test.withColumn("item", F.split('col1', " "))
tst_xpl = tst_spl.select(F.explode("item"))
tst_map = tst_xpl.withColumn("key", F.split('col', '=')[0]).withColumn("value", F.split('col', '=')[1]).drop('col')
#%%
# Collect all values per key into arrays, one column per key
tst_pivot = tst_map.groupby(F.lit(1)).pivot('key').agg(F.collect_list('value')).drop('1')
#%%
# Explode each key column with its position, then join the pieces back on pos
tst_arr = [tst_pivot.select(F.posexplode(coln)).withColumnRenamed('col', coln) for coln in tst_pivot.columns]
tst_fin = reduce(lambda df1, df2: df1.join(df2, on='pos', how='full'), tst_arr).orderBy('pos')
tst_fin.show()
+---+----+----+----+----+
|pos| A| B| C| D|
+---+----+----+----+----+
| 0|id3a|id3b|id1c|id1d|
| 1|id4a|id1b|id2c|null|
| 2|id1a|id2b|id3c|null|
| 3|id2a|null|null|null|
+---+----+----+----+----+
I have a PySpark dataframe with a column that contains a Python list:
id value
1 [1,2,3]
2 [1,2]
I want to remove all rows where the length of the list in the value column is less than 3.
So I tried:
df.filter(len(df.value) >= 3)
and indeed it does not work.
How can I filter the dataframe by the length of the inside data?
Use size() - it returns the length of the array or map stored in the column.
from pyspark.sql.functions import size
myValues = [(1,[1,2,3]),(2,[1,2])]
df = sqlContext.createDataFrame(myValues,['id','value'])
df.show()
+----+---------+
|  id|    value|
+----+---------+
|   1|  [1,2,3]|
|   2|    [1,2]|
+----+---------+
df.filter(size(df.value) >= 3).show()
+----+---------+
|  id|    value|
+----+---------+
|   1|  [1,2,3]|
+----+---------+
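As a side note, the same filter can also be written as a SQL expression string (a small sketch against the same df as above):
# Equivalent filter using a SQL expression string
df.filter("size(value) >= 3").show()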
I am working with PySpark and trying to figure out how to do a complex calculation involving previous rows. I think there are generally two ways to do calculations with previous rows: window functions and mapPartitions. I think my problem is too complex to solve with window functions, and I want the result as a separate row, not a column, so I am trying to use mapPartitions. I am having trouble with its syntax.
For instance, here is a rough draft of the code.
def change_dd(rows):
    prev_rows = []
    prev_rows.append(rows)
    for row in rows:
        new_row = []
        for entry in row:
            # Testing to figure out syntax, things would get more complex
            new_row.append(entry + prev_rows[0])
        yield new_row
updated_rdd = select.rdd.mapPartitions(change_dd)
However, I can't access the individual data inside prev_rows. It seems prev_rows[0] is an itertools.chain. How do I iterate over prev_rows[0]?
Edit:
neighbor = sc.broadcast(df_sliced.where(df_sliced.id == neighbor_idx).collect()[0][:-1]).value
current = df_sliced.where(df_sliced.id == i)

def oversample_dt(dataframe):
    for row in dataframe:
        new_row = []
        for entry, neigh in zip(row, neighbor):
            if isinstance(entry, str):
                if scale < 0.5:
                    new_row.append(entry)
                else:
                    new_row.append(neigh)
            else:
                if isinstance(entry, int):
                    new_row.append(int(entry + (neigh - entry) * scale))
                else:
                    new_row.append(entry + (neigh - entry) * scale)
        yield new_row

sttt = time.time()
sample = current.rdd.mapPartitions(oversample_dt).toDF(schema)
In the end, I ended up doing it like this for now, but I really don't want to use collect in the first line. If someone knows how to fix this, or can point out any problem with how I'm using PySpark, please tell me.
Edit 2:
Suppose Alice and its neighbor Alice_2, with scale = 0.4:
+---+-------+--------+
|age| name | height |
+---+-------+--------+
| 10| Alice | 170 |
| 11|Alice_2| 175 |
+---+-------+--------+
Then, I want a row
+----------+---------+-------------+
|      age |  name   |   height    |
+----------+---------+-------------+
| 10+1*0.4 | Alice_2 | 170 + 5*0.4 |
+----------+---------+-------------+
Why not use DataFrames?
Add a column to the dataframe with the previous values using window functions like this:
from pyspark.sql import SparkSession, functions
from pyspark.sql.window import Window
spark_session = SparkSession.builder.getOrCreate()
df = spark_session.createDataFrame([{'name': 'Alice', 'age': 1}, {'name': 'Alice_2', 'age': 2}])
df.show()
+---+-------+
|age| name|
+---+-------+
| 1| Alice|
| 2|Alice_2|
+---+-------+
window = Window.partitionBy().orderBy('age')
df = df.withColumn("age-1", functions.lag(df.age).over(window))
df.show()
You can use this function for every column
+---+-------+-----+
|age| name|age-1|
+---+-------+-----+
| 1| Alice| null|
| 2|Alice_2| 1|
+---+-------+-----+
And then just do your calculation.
And if you want to use an RDD, just use df.rdd.
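Building on this, a minimal sketch of the interpolated row from the question's Edit 2 (the scale of 0.4 and the age/height formulas come from the question; the prev_age and prev_height column names are illustrative):
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [(10, 'Alice', 170), (11, 'Alice_2', 175)], ['age', 'name', 'height'])
scale = 0.4  # taken from the question's example

# Bring the previous row's values alongside the current row, then interpolate
window = Window.partitionBy().orderBy('age')
interpolated = (
    df.withColumn('prev_age', F.lag('age').over(window))
      .withColumn('prev_height', F.lag('height').over(window))
      .where(F.col('prev_age').isNotNull())
      .select(
          (F.col('prev_age') + (F.col('age') - F.col('prev_age')) * scale).alias('age'),
          F.col('name'),
          (F.col('prev_height') + (F.col('height') - F.col('prev_height')) * scale).alias('height')))
interpolated.show()  # age = 10 + 1*0.4, height = 170 + 5*0.4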
I am attempting the following in Scala-Spark.
I'm hoping someone can give me some guidance on how to tackle this problem or provide me with some resources to figure out what I can do.
I have a dateCountDF with a count corresponding to a date. I would like to randomly select a certain number of entries for each dateCountDF.month from another DataFrame entitiesDF where dateCountDF.FirstDate < entitiesDF.Date && entitiesDF.Date <= dateCountDF.LastDate, and then place all the results into a new DataFrame. See below for a data example.
I'm not at all sure how to approach this problem from a Spark-SQL or Spark-MapReduce perspective. The furthest I got was the naive approach, where I use a foreach on a DataFrame and then refer to the other dataframe within the function. But this doesn't work because of the distributed nature of Spark.
val randomEntites = dateCountDF.foreach(x => {
  val count: Int = x(1).toString().toInt
  val result = entitiesDF.take(count)
  return result
})
DataFrames
**dateCountDF**
+----------+----------------+
|      Date|           Count|
+----------+----------------+
|2016-08-31|               4|
|2015-12-31|               1|
|2016-09-30|               5|
|2016-04-30|               5|
|2015-11-30|               3|
|2016-05-31|               7|
|2016-11-30|               2|
|2016-07-31|               5|
|2016-12-31|               9|
|2014-06-30|               4|
+----------+----------------+
only showing top 10 rows
**entitiesDF**
+----------+-----------------+----------+
|        ID|        FirstDate|  LastDate|
+----------+-----------------+----------+
|       296|       2014-09-01|2015-07-31|
|       125|       2015-10-01|2016-12-31|
|       124|       2014-08-01|2015-03-31|
|       447|       2017-02-01|2017-01-01|
|       307|       2015-01-01|2015-04-30|
|       574|       2016-01-01|2017-01-31|
|       613|       2016-04-01|2017-02-01|
|       169|       2009-08-23|2016-11-30|
|       205|       2017-02-01|2017-02-01|
|       433|       2015-03-01|2015-10-31|
+----------+-----------------+----------+
only showing top 10 rows
Edit:
For clarification.
My inputs are entitiesDF and dateCountDF. I want to loop through dateCountDF, and for each row I want to select a random number of entities from entitiesDF where dateCountDF.FirstDate < entitiesDF.Date && entitiesDF.Date <= dateCountDF.LastDate.
To select random rows you can do it like this (note that the snippet below is written in PySpark; the same idea applies in Scala):
import random

def sampler(df, col, records):
    # Calculate number of rows
    colmax = df.count()
    # Create random sample from range
    vals = random.sample(range(1, colmax), records)
    # Use 'vals' to filter DataFrame using 'isin'
    return df.filter(df[col].isin(vals))
Select the random number of rows you want and store them in a dataframe, then add that data to the other dataframe; for this you can use unionAll.
You can also refer to this answer.
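A possible alternative sketch in PySpark (assuming the intended condition is entitiesDF.FirstDate < dateCountDF.Date <= entitiesDF.LastDate and that Count says how many entities to pick per date; the intermediate names here are illustrative): join the two dataframes on the date-range condition, then keep a random subset per date.
from pyspark.sql import functions as F, Window

# Join each date to the entities whose [FirstDate, LastDate] range contains it
joined = dateCountDF.join(
    entitiesDF,
    (entitiesDF.FirstDate < dateCountDF.Date) & (dateCountDF.Date <= entitiesDF.LastDate))

# Shuffle entities within each date and keep at most Count of them
w = Window.partitionBy("Date").orderBy("rnd")
randomEntities = (joined
                  .withColumn("rnd", F.rand())
                  .withColumn("rn", F.row_number().over(w))
                  .where(F.col("rn") <= F.col("Count"))
                  .drop("rnd", "rn"))
randomEntities.show()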
I have 2 columns, say id and value; id is of type Int and value is of type List[String].
The ids are repeating, so to make them unique I apply groupBy("id") on my DataFrame. Now my problem is that I want to append the values to each other, and the value column must contain only distinct entries.
Example: I have data like
+---+---+
| id| v |
+---+---+
| 1|[a]|
| 1|[b]|
| 1|[a]|
| 2|[e]|
| 2|[b]|
+---+---+
and I want my output like this:
+---+-----+
| id|    v|
+---+-----+
|  1|[a,b]|
|  2|[e,b]|
+---+-----+
I tried this:
val uniqueDF = df.groupBy("id").agg(collect_list("v"))
uniqueDF.map{ row => (row.getInt(0),
  row.getSeq[String](1).toList.distinct) }
Can I do the same after groupBy(), say in agg() or something? I do not want to apply a map operation.
Thanks.
val uniqueDF = df.groupBy("id").agg(collect_set("v"))
A set will keep only unique values.
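For reference, a sketch of the same idea in PySpark, assuming a DataFrame df with columns id and v as in the question:
from pyspark.sql import functions as F

# collect_set keeps only distinct values within each group
uniqueDF = df.groupBy("id").agg(F.collect_set("v").alias("v"))
uniqueDF.show()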