Capture null on pyspark dataframe? - pyspark

In a regular (non-Spark) Jupyter notebook, I write:
for y in df.columns:
    df["mflag"] = df["y"].isnull()
It shows True = 704 and False = 24583, which I know are the correct counts.
Now the same source CSV file is read into a PySpark DataFrame. The variable y is double (nullable = true). This is also correct, because y is a ratio variable.
I wrote the code below to try to replicate what I did in the Jupyter notebook, creating an mflag column that separates the 704 nulls from the 24583 non-nulls.
df = df.withColumn('mflag', when(df['y'].isNull,"0").when(df['y'].isNotNull, "1").otherwise("1"))
It does not raise an error, but in the DataFrame it produces 9 instances of mflag: string (nullable = false). If I use 1 instead of "1", it produces 9 integer columns instead. This is useless because mflag cannot be referenced later, and obviously I only need one mflag. What is wrong? Jia
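One thing worth checking (a hedged guess, not an answer given in this thread): isNull and isNotNull are Column methods and have to be called with parentheses, and a simple when/otherwise only needs the null branch. A minimal sketch of the usual way to build such a flag, assuming the column really is named y:
from pyspark.sql import functions as F
# isNull() must be *called*; note the parentheses
df = df.withColumn("mflag", F.when(F.col("y").isNull(), "0").otherwise("1"))
# sanity check: should report 704 vs 24583 if the data matches the pandas run
df.groupBy("mflag").count().show()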

Related

Create multiple rows of fixed length from a data frame column in Pyspark

My input is a PySpark DataFrame that has only one column, DETAIL_REC.
detail_df.show()
DETAIL_REC
================================
ABC12345678ABC98765543ABC98762345
detail_df.printSchema()
root
|-- DETAIL_REC: string (nullable = true)
Every 11-character chunk has to go into its own row of the DataFrame for a downstream process to consume.
The expected output should be multiple rows in the DataFrame:
DETAIL_REC (no blank lines after each record)
==============
ABC12345678
ABC98765543
ABC98762345
If you have Spark 2.4+, you can make use of higher-order functions to do it like below:
from pyspark.sql import functions as F
n = 11
output = df.withColumn("SubstrCol",F.explode((F.expr(f"""filter(
transform(
sequence(0,length(DETAIL_REC),{n})
,x-> substring(DETAIL_REC,x+1,{n}))
,y->y <> '')"""))))
output.show(truncate=False)
+---------------------------------+-----------+
|DETAIL_REC |SubstrCol |
+---------------------------------+-----------+
|ABC12345678ABC98765543ABC98762345|ABC12345678|
|ABC12345678ABC98765543ABC98762345|ABC98765543|
|ABC12345678ABC98765543ABC98762345|ABC98762345|
+---------------------------------+-----------+
Logic used:
First, generate a sequence of integers starting from 0 up to the length of the string, in steps of n (11).
Using transform, iterate through this sequence and keep taking substrings from the original string (this keeps advancing the start position).
Filter out any blank strings from the resulting array and explode the array (the intermediate steps are sketched below).
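To make the intermediate steps concrete, here is a small inspection sketch (not part of the original answer) showing the arrays produced before the explode, assuming the same single-row df and n = 11:
df.select(
    F.expr(f"sequence(0, length(DETAIL_REC), {n})").alias("starts"),
    F.expr(f"transform(sequence(0, length(DETAIL_REC), {n}), x -> substring(DETAIL_REC, x+1, {n}))").alias("chunks")
).show(truncate=False)
# starts -> [0, 11, 22, 33]
# chunks -> [ABC12345678, ABC98765543, ABC98762345, ]  (note the trailing empty string that filter removes)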
For lower versions of Spark, use a UDF with textwrap or any other function, as addressed here:
from pyspark.sql import functions as F, types as T
from textwrap import wrap
n = 11
myudf = F.udf(lambda x: wrap(x,n),T.ArrayType(T.StringType()))
output = df.withColumn("SubstrCol",F.explode(myudf("DETAIL_REC")))
output.show(truncate=False)
+---------------------------------+-----------+
|DETAIL_REC |SubstrCol |
+---------------------------------+-----------+
|ABC12345678ABC98765543ABC98762345|ABC12345678|
|ABC12345678ABC98765543ABC98762345|ABC98765543|
|ABC12345678ABC98765543ABC98762345|ABC98762345|
+---------------------------------+-----------+

pyspark.sql.utils.ParseException error when filtering the df

I want to select all rows from a PySpark DataFrame except the rows where an array column contains a certain value ("hello" in the example below). It works with the code below in the notebook:
<pyspark df>.filter(~exists("<col name>", lambda x: x=="hello"))
But when I write it as this:
cond = '~exists("<col name>", lambda x: x=="hello")'
df = df.filter(cond)
I get the error below:
pyspark.sql.utils.ParseException:
extraneous input 'x' expecting {')', ','}(line 1, pos 32)
I really can't spot any typo. Could someone give me a hint about what I missed?
Thanks, J
To pass the condition in through a variable, it needs to be written as a Spark SQL expression string (the syntax accepted by expr). So it can be modified to:
cond = '!exists(col_name, x -> x == "hello")'
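For completeness, a minimal end-to-end sketch of passing the condition as a string (assuming Spark 2.4+, where exists is available as a SQL higher-order function; the column name and data here are made up):
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(["hello", "a"],), (["b", "c"],)], ["col_name"])
cond = '!exists(col_name, x -> x == "hello")'
df.filter(cond).show()  # keeps only the row whose array does not contain "hello"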

Why won't the exp function work in pyspark?

I'm trying to calculate odds ratios from the coefficients of a logistic regression but I'm encountering a problem best summed up by this code:
import pyspark.sql.functions as F
F.exp(1.2)
This fails with
py4j.Py4JException: Method exp([class java.lang.Double]) does not exist
An integer fails similarly. I don't understand how a Double can cause a problem for the exp function.
If you have a look at the documentation for pyspark.sql.functions.exp(), it takes a Column object (or column name) as input, so it will not work for a plain float value such as 1.2.
Create a DataFrame column (a Column object) which you can use in F.exp().
An example would be:
df = df.withColumn("exp_x", F.exp(F.col("some_col_named_x")))
As @pissall mentioned, pyspark.sql.functions.exp takes a Column object as its parameter, but you can use pyspark.sql.functions.lit (introduced in version 1.3.0) to create a Column from a literal value.
from pyspark.sql.functions import exp, lit
df = df.withColumn("exp_1", exp(lit(1)))

Matrix Multiplication A^T * A in PySpark

I asked a similar question yesterday - Matrix Multiplication between two RDD[Array[Double]] in Spark - however I've decided to shift to pyspark to do this. I've made some progress loading and reformatting the data - Pyspark map from RDD of strings to RDD of list of doubles - however the matrix multiplication is difficult. Let me share my progress first:
matrix1.txt
1.2 3.4 2.3
2.3 1.1 1.5
3.3 1.8 4.5
5.3 2.2 4.5
9.3 8.1 0.3
4.5 4.3 2.1
It's difficult to share files, but this is what my matrix1.txt file looks like. It is a space-delimited text file containing the values of a matrix. Next is the code:
# do the imports for pyspark and numpy
from pyspark import SparkConf, SparkContext
import numpy as np

# loadmatrix is a helper function used to read matrix1.txt and format
# from RDD of strings to RDD of list of floats
def loadmatrix(sc):
    data = sc.textFile("matrix1.txt").map(lambda line: line.split(' ')).map(lambda line: [float(x) for x in line])
    return(data)

# this is the function I am struggling with, it should take a line of the
# matrix (formatted as list of floats), compute an outer product with itself
def AtransposeA(line):
    # pseudocode for this would be...
    # outerprod = compute line * line^transpose
    # return(outerprod)

# here is the main body of my file
if __name__ == "__main__":
    # create the conf, sc objects, then use loadmatrix to read data
    conf = SparkConf().setAppName('SVD').setMaster('local')
    sc = SparkContext(conf = conf)
    mymatrix = loadmatrix(sc)

    # this is pseudocode for calling AtransposeA
    ATA = mymatrix.map(lambda line: AtransposeA(line)).reduce(elementwise add all the outerproducts)

    # the SVD of ATA is computed below
    U, S, V = np.linalg.svd(ATA)

    # ...
My approach is as follows: to do the matrix multiplication A^T * A, I create a function that computes outer products of rows of A. The elementwise sum of all of the outer products is the product I want. I then call AtransposeA() in a map function, so that it is performed on each row of the matrix, and finally I use a reduce() to add the resulting matrices.
I'm struggling with how the AtransposeA function should look. How can I do an outer product in PySpark like this? Thanks in advance for the help!
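A minimal sketch of the outer-product idea described above, using numpy inside the map and reduce. This is one possible way to fill in the pseudocode, not necessarily how the original author finished it:
import numpy as np
from pyspark import SparkConf, SparkContext

def AtransposeA(line):
    row = np.array(line)
    return np.outer(row, row)  # outer product of the row with itself

if __name__ == "__main__":
    sc = SparkContext(conf=SparkConf().setAppName('SVD').setMaster('local'))
    mymatrix = sc.textFile("matrix1.txt").map(lambda line: [float(x) for x in line.split(' ')])
    # elementwise-add all the per-row outer products to get A^T * A
    ATA = mymatrix.map(AtransposeA).reduce(lambda a, b: a + b)
    U, S, V = np.linalg.svd(ATA)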
First, consider why you want to use Spark for this. It sounds like all your data fits in memory, in which case you can use numpy and pandas in a very straightforward way.
If your data isn't structured so that rows are independent, then it probably can't be parallelized by sending groups of rows to different nodes, which is the whole point of using Spark.
Having said that... here is some pyspark (2.1.1) code that I think does what you want.
# read the matrix file
df = spark.read.csv("matrix1.txt",sep=" ",inferSchema=True)
df.show()
+---+---+---+
|_c0|_c1|_c2|
+---+---+---+
|1.2|3.4|2.3|
|2.3|1.1|1.5|
|3.3|1.8|4.5|
|5.3|2.2|4.5|
|9.3|8.1|0.3|
|4.5|4.3|2.1|
+---+---+---+
# do the sum of the multiplication that we want, and get
# one data frame for each column
from pyspark.sql import functions as F
from functools import reduce

colDFs = []
for c2 in df.columns:
    colDFs.append( df.select( [ F.sum(df[c1]*df[c2]).alias("op_{0}".format(i)) for i,c1 in enumerate(df.columns) ] ) )

# now union those separate data frames to build the "matrix"
mtxDF = reduce(lambda a,b: a.select(a.columns).union(b.select(a.columns)), colDFs )
mtxDF.show()
mtxDF.show()
+------------------+------------------+------------------+
| op_0| op_1| op_2|
+------------------+------------------+------------------+
| 152.45|118.88999999999999| 57.15|
|118.88999999999999|104.94999999999999| 38.93|
| 57.15| 38.93|52.540000000000006|
+------------------+------------------+------------------+
This seems to be the same result that you get from numpy.
a = numpy.genfromtxt("matrix1.txt")
numpy.dot(a.T, a)
array([[ 152.45,  118.89,   57.15],
       [ 118.89,  104.95,   38.93],
       [  57.15,   38.93,   52.54]])
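If the matrix were too large for the single-machine approaches above, MLlib's distributed RowMatrix can compute the Gramian A^T * A directly. A sketch, assuming Spark 2.0+ and an existing SparkContext sc (not part of the original answer):
import numpy as np
from pyspark.mllib.linalg.distributed import RowMatrix

rows = sc.textFile("matrix1.txt").map(lambda line: [float(x) for x in line.split(' ')])
mat = RowMatrix(rows)
ata = mat.computeGramianMatrix()  # local matrix holding A^T * A
U, S, V = np.linalg.svd(ata.toArray())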

How to extract columns of data from .txt files MATLAB

I have some data in a .txt file that is separated by commas.
For example:
1.4,2,3,4,5
2,3,4.2,5,6
24,5,2,33.4,62
What if you want the average of a column, like the first column (1.4, 2, and 24)? Or the second column (2, 3, and 5)?
I think putting the column in an array and using the built-in mean function would work, but so far I am only able to extract rows, not columns.
Instead of making another thread, I thought I'd edit this one. I am working on getting the average of each column of the well-known iris data set.
I cut a small portion of the data:
5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa
5.4,3.9,1.7,0.4,Iris-setosa
4.6,3.4,1.4,0.3,Iris-setosa
delimiterln= ',';
data = importdata('iris.txt', delimiterln);
meanCol1 = mean(data(:,1))
meanCol2 = mean(data(:,2))
meanCol3 = mean(data(:,3))
meanCol4 = mean(data(:,4))
Undefined function 'sum' for input arguments of type 'cell'.
Error in mean (line 115)
y = sum(x, dim, flag)/size(x,dim);
Error in irisData (line 6)
meanCol1 = mean(data(:,1))
It looks like there is an error with handling the data type... any thoughts on this? I tried getting rid of the last column, which contains strings, and it seems to work without error. So I am thinking that it's because of the strings.
Use the comma-separated file reading function:
M = csvread(filename);
Now you have the matrix M:
col1Mean=mean(M(:,1));