The usage of addPyFile in pyspark is ambiguous

There is an addPyFile method in pyspark, but I don't know how to use it, and there are very few usage examples online. I think addPyFile can ship Python files to the Spark worker nodes, and I tested it:
sc.addPyFile("/sparkLearning/tools/redis_util.py")
rdd = sc.parallelize([('a', 10), ('b', 20), ('c', 30)])

def map_1(row):
    redis_util = redis_util.RedisUtil()
    k = row[0]
    v = row[1]
    redis_util.set(name=k, value=v)

rdd.map(map_1).collect()
It fails with: UnboundLocalError: local variable 'redis_util' referenced before assignment. So what is the correct way to use addPyFile?
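For reference, a minimal sketch of how the shipped file is usually picked up (this is my own illustration, not a confirmed answer). The error above comes from the local variable redis_util shadowing the module of the same name; importing the module inside the task and using a different variable name avoids it. It assumes redis_util.py defines a RedisUtil class as in the question:

    sc.addPyFile("/sparkLearning/tools/redis_util.py")
    rdd = sc.parallelize([('a', 10), ('b', 20), ('c', 30)])

    def map_1(row):
        # import inside the task: the file shipped via addPyFile is on the
        # executor's Python path by the time the task runs
        import redis_util
        # use a different name for the instance so it doesn't shadow the module
        util = redis_util.RedisUtil()
        k, v = row
        util.set(name=k, value=v)
        return row

    rdd.map(map_1).collect()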

Related

Is there a way to save a csv (or any other format) without overwriting in a pandas UDF

I am trying to fit a transformation to selected columns of a PySpark SQL dataframe. As there is no native PySpark function for it, I am using a UDF. I also have to save a few metrics inside the function, so I am doing a to_csv with mode='append', but because a groupby statement drives the execution, only the last group's result ends up in the CSV. Is there a way to store results in CSV (or any other format) such that all groupby results are saved in the file, not just the last one? I have tried many things but nothing works so far.
Note: Only the last code line before return df has to be changed, such that it keeps appending without removing any data that already exists in the file.
My code
def gamma_transform(df):
    # features for fit
    col = ['col1', 'col2', 'col3']
    # create optimal table
    optimal_k_table_subset = pd.DataFrame()
    optimal_k_table = pd.DataFrame()
    for var in col:
        gamma_sk_test_cv = cross_validate(gamma_cdf(), df[var], df['abc'], cv=GroupKFold(n_splits=5), groups=df['xyz'], scoring='neg_root_mean_squared_error', return_estimator=True)
        optimal_k = gamma_sk_test_cv['test_score'].argmin()
        alpha, beta, phi = [gamma_sk_test_cv['estimator'][optimal_k].a, gamma_sk_test_cv['estimator'][optimal_k].b, gamma_sk_test_cv['estimator'][optimal_k].p]
        df[var] = gamma_cdf(alpha, beta, phi).transform(df[var])
        # Save optimal values
        row = {
            'brand': df['name'].unique()[0],
            'promo_channel': var,
            'segment_name': df['req'].unique()[0],
            'segment_value': df['res'].unique()[0],
            'alpha': alpha,
            'beta': beta,
            'phi': phi
        }
        optimal_k_table_subset = pd.DataFrame([row])
        optimal_k_table_subset.to_csv(output_file_path, mode='a', header=True, index=False)
        optimal_k_table = pd.concat([optimal_k_table, optimal_k_table_subset], axis=0).reset_index(drop=True)
    # This is the needed section
    # This save only gives the last set's result and doesn't append all
    # This line sits outside the for loop. If I keep it inside the for loop, only one record gets saved
    optimal_k_table.to_csv(output_file_path, mode='a', header=True, index=False)
    return df

# Applying the function to a pyspark sql dataframe
df = data.groupby('name', 'req', 'res').applyInPandas(gamma_transform, schema=schema)
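One possible workaround (a sketch of mine, not from the post): avoid appending to a single shared file from inside applyInPandas, since each group runs in its own task and concurrent appends to one CSV can clobber each other. Writing one uniquely named file per group, keyed by the group columns, keeps every group's metrics; output_dir here is a hypothetical directory path:

    import os
    import pandas as pd

    def save_group_metrics(optimal_k_table, output_dir):
        # optimal_k_table: the per-group metrics frame built inside gamma_transform
        # Build a file name from the group key so each applyInPandas task writes
        # its own file instead of racing to append to one shared CSV.
        key = "_".join(str(v) for v in optimal_k_table[['brand', 'segment_name', 'segment_value']].iloc[0])
        path = os.path.join(output_dir, f"metrics_{key}.csv")
        optimal_k_table.to_csv(path, index=False)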

How to union multiple dynamic inputs in Palantir Foundry?

I want to union multiple datasets in Palantir Foundry. The names of the datasets are dynamic, so I would not be able to give the dataset names to transform_df() statically. Is there a way I can dynamically take multiple inputs into transform_df and union all of those dataframes?
I tried looping over the datasets like:
li = ['dataset1_path', 'dataset2_path']
union_df = None
for p in li:
    @transforms_df(
        my_input=Input(p),
        Output(p+"_output")
    )
    def my_compute_function(my_input):
        return my_input

    if union_df is None:
        union_df = my_compute_function
    else:
        union_df = union_df.union(my_compute_function)
But, this doesn't generate the unioned output.
This should work for you with some changes. It is an example built around a dynamic dataset of JSON files, so your situation may differ only a little. Here is a generalized way of handling dynamic JSON input datasets that should be adaptable to any dynamic input file type, or to any dataset internal to Foundry that you can specify. This generic example works on a set of JSON files uploaded to a dataset node in the platform and is fully dynamic. Doing a union after this should be a simple matter.
There's some bonus logging going on here as well.
Hope this helps.
from transforms.api import Input, Output, transform
from pyspark.sql import functions as F
import json
import logging

def transform_generator():
    transforms = []
    transf_dict = {## enter your dynamic mappings here ##}

    for value in transf_dict:
        @transform(
            out=Output(' path to your output here '.format(val=value)),
            inpt=Input(" path to input here ".format(val=value)),
        )
        def update_set(ctx, inpt, out):
            spark = ctx.spark_session
            sc = spark.sparkContext

            filesystem = list(inpt.filesystem().ls())
            file_dates = []
            for files in filesystem:
                with inpt.filesystem().open(files.path) as fi:
                    data = json.load(fi)
                    file_dates.append(data)

            logging.info('info logs:')
            logging.info(file_dates)

            json_object = json.dumps(file_dates)
            df_2 = spark.read.option("multiline", "true").json(sc.parallelize([json_object]))
            df_2 = df_2.withColumn('upload_date', F.current_date())
            df_2 = df_2.drop_duplicates()
            out.write_dataframe(df_2)

        transforms.append(update_set)

    return transforms

TRANSFORMS = transform_generator()
So this question breaks down into two questions.
How to handle transforms with programmatic input paths
To handle transforms with programmatic inputs, it is important to remember two things:
1st - Transforms determine your inputs and outputs at CI time. This means that you can have Python code that generates transforms, but you cannot read paths from a dataset; they need to be hardcoded into the Python code that generates the transform.
2nd - Your transforms are created once, during the CI execution. This means you can't have incremental or special logic that generates different paths whenever the dataset builds.
With these two premises, as in your example or @jeremy-david-gamet's (ty for the reply, gave you a +1), you can have Python code that generates your paths at CI time.
dataset_paths = ['dataset1_path', 'dataset2_path']

for path in dataset_paths:
    @transforms_df(
        my_input=Input(path),
        Output(f"{path}_output")
    )
    def my_compute_function(my_input):
        return my_input
However, to union them you'll need a second transform to execute the union. You'll need to pass multiple inputs, so you can use *args or **kwargs for this:
dataset_paths = ['dataset1_path', 'dataset2_path']
all_args = [Input(path) for path in dataset_paths]
all_args.append(Output("path/to/unioned_dataset"))

@transforms_df(*all_args)
def my_compute_function(*args):
    input_dfs = []
    for arg in args:
        # there are other arguments like ctx in the args list, so we need to
        # check for type. You can also use kwargs for more determinism.
        if isinstance(arg, pyspark.sql.DataFrame):
            input_dfs.append(arg)

    # now that you have your dfs in a list you can union them
    # Note I didn't test this code, but it should be something like this
    ...
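One way to finish that sketch (my own illustration, not part of the original answer) is to fold the collected dataframes together with functools.reduce, much like the reduce-based union shown further down. These would be the closing lines of my_compute_function, assuming all inputs share one schema:

    from functools import reduce

    # continuing the body of my_compute_function above:
    # union the dataframes gathered in input_dfs pairwise
    unioned = reduce(lambda df1, df2: df1.unionByName(df2), input_dfs)
    return unioned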
How to union datasets with different schemas.
For this part there are plenty of Q&As out there on how to union dataframes with different schemas in Spark. Here is a short code example copied from https://stackoverflow.com/a/55461824/26004
from pyspark.sql import SparkSession, HiveContext
from pyspark.sql.functions import lit
from pyspark.sql import Row

def customUnion(df1, df2):
    cols1 = df1.columns
    cols2 = df2.columns
    total_cols = sorted(cols1 + list(set(cols2) - set(cols1)))

    def expr(mycols, allcols):
        def processCols(colname):
            if colname in mycols:
                return colname
            else:
                return lit(None).alias(colname)
        cols = map(processCols, allcols)
        return list(cols)

    appended = df1.select(expr(cols1, total_cols)).union(df2.select(expr(cols2, total_cols)))
    return appended
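A hypothetical usage of customUnion, folding it over a list of dataframes with functools.reduce (df_a, df_b and df_c are placeholder names for whatever inputs you end up with):

    from functools import reduce

    dfs = [df_a, df_b, df_c]  # dataframes whose schemas may differ
    combined = reduce(customUnion, dfs)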
Since inputs and outputs are determined at CI time, we cannot form truly dynamic inputs. We will have to somehow point to specific datasets in the code. Assuming the paths of the datasets share the same root, the following seems to require minimal maintenance:
from transforms.api import transform_df, Input, Output
from functools import reduce

datasets = [
    'dataset1',
    'dataset2',
    'dataset3',
]

inputs = {f'inp{i}': Input(f'input/folder/path/{x}') for i, x in enumerate(datasets)}
kwargs = {
    **{'output': Output('output/folder/path/unioned_dataset')},
    **inputs
}

@transform_df(**kwargs)
def my_compute_function(**inputs):
    unioned_df = reduce(lambda df1, df2: df1.unionByName(df2), inputs.values())
    return unioned_df
Regarding unions of different schemas, since Spark 3.1 one can use this:
df1.unionByName(df2, allowMissingColumns=True)
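For instance (a small sketch of my own, not from the original answer), the reduce call inside my_compute_function above can simply pass allowMissingColumns=True so the unioned inputs no longer need identical schemas:

    from functools import reduce

    # inside my_compute_function, replacing the plain unionByName reduce
    unioned_df = reduce(
        lambda df1, df2: df1.unionByName(df2, allowMissingColumns=True),
        inputs.values()
    )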

How can I reuse the dataframe and use an alternative to iloc to run an iterative imputer in Azure Databricks

I am running an iterative imputer in a Jupyter Notebook to first mark the known incorrect values as NaN and then run the iterative imputer to impute the correct values, in order to achieve the required sharpness in the data. The sample code is given below:
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
import numpy as np
import pandas as pd

idx = [761, 762, 763, 764]
cols = ['11', '12', '13', '14']

def fit_imputer():
    for i in range(len(idx)):
        for col in cols:
            dfClean.iloc[idx[i], col] = np.nan
            print('Index = {} Col = {} Defiled Value is: {}'.format(idx[i], col, dfClean.iloc[idx[i], col]))
            # Run Imputer for Individual row
            tempOut = imp.fit_transform(dfClean)
            print("Imputed Value = ", tempOut[idx[i], col])
            dfClean.iloc[idx[i], col] = tempOut[idx[i], col]
            print("new dfClean Value = ", dfClean.iloc[idx[i], col])
            origVal.append(dfClean_Orig.iloc[idx[i], col])
I get an error when I try to run this code on Azure Databricks using PySpark or Scala, because dataframes in Spark are immutable and I also cannot use iloc the way I used it on the pandas dataframe.
Is there a way, or a better way, of implementing such an imputation in Databricks?
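One possible approach (a sketch, not a tested answer): if the data fits in driver memory, convert the Spark dataframe to pandas, run the scikit-learn imputer there, and convert back. spark_df and the row/column positions below are placeholders taken from the question, not known values:

    import numpy as np
    import pandas as pd
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    # spark_df is a placeholder for the Spark dataframe holding the data
    pdf = spark_df.toPandas()

    # mark the known-bad cells as NaN, then impute the whole frame at once
    pdf.iloc[[761, 762, 763, 764], [11, 12, 13, 14]] = np.nan
    imputed = pd.DataFrame(IterativeImputer().fit_transform(pdf), columns=pdf.columns)

    # back to Spark for the rest of the pipeline
    imputed_df = spark.createDataFrame(imputed)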

Matrix Multiplication A^T * A in PySpark

I asked a similar question yesterday - Matrix Multiplication between two RDD[Array[Double]] in Spark - however I've decided to shift to pyspark to do this. I've made some progress loading and reformatting the data - Pyspark map from RDD of strings to RDD of list of doubles - however the matrix multiplication is difficult. Let me share my progress first:
matrix1.txt
1.2 3.4 2.3
2.3 1.1 1.5
3.3 1.8 4.5
5.3 2.2 4.5
9.3 8.1 0.3
4.5 4.3 2.1
It's difficult to share files, but this is what my matrix1.txt file looks like: a space-delimited text file containing the values of a matrix. Next is the code:
# do the imports for pyspark and numpy
from pyspark import SparkConf, SparkContext
import numpy as np

# loadmatrix is a helper function used to read matrix1.txt and format
# from RDD of strings to RDD of list of floats
def loadmatrix(sc):
    data = sc.textFile("matrix1.txt").map(lambda line: line.split(' ')).map(lambda line: [float(x) for x in line])
    return(data)

# this is the function I am struggling with, it should take a line of the
# matrix (formatted as list of floats), compute an outer product with itself
def AtransposeA(line):
    # pseudocode for this would be...
    # outerprod = compute line * line^transpose
    # return(outerprod)
    pass

# here is the main body of my file
if __name__ == "__main__":
    # create the conf, sc objects, then use loadmatrix to read data
    conf = SparkConf().setAppName('SVD').setMaster('local')
    sc = SparkContext(conf = conf)
    mymatrix = loadmatrix(sc)

    # this is pseudocode for calling AtransposeA
    ATA = mymatrix.map(lambda line: AtransposeA(line)).reduce(elementwise add all the outerproducts)

    # the SVD of ATA is computed below
    U, S, V = np.linalg.svd(ATA)

    # ...
My approach is as follows: to compute the matrix multiplication A^T * A, I create a function that computes the outer product of each row of A with itself. The elementwise sum of all of the outer products is the product I want. I then call AtransposeA() in a map function so that it is applied to each row of the matrix, and finally I use reduce() to add the resulting matrices.
I'm struggling with how the AtransposeA function should look. How can I do an outer product in pyspark like this? Thanks in advance for any help!
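For what it's worth, a minimal sketch of that outer-product approach (my own illustration, assuming numpy is available on the workers and loadmatrix is defined as above):

    import numpy as np

    def AtransposeA(line):
        # line is one row of A as a list of floats;
        # np.outer(row, row) is the rank-1 matrix row^T * row
        row = np.array(line)
        return np.outer(row, row)

    # elementwise-sum the per-row outer products to obtain A^T * A
    ATA = mymatrix.map(AtransposeA).reduce(lambda m1, m2: m1 + m2)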
First, consider why you want to use Spark for this. It sounds like all your data fits in memory, in which case you can use numpy and pandas in a very straight-forward way.
If your data isn't structured so that rows are independent, then it probably can't be parallelized by sending groups of rows to different nodes, which is the whole point of using Spark.
Having said that... here is some pyspark (2.1.1) code that I think does what you want.
from functools import reduce
from pyspark.sql import functions as F

# read the matrix file
df = spark.read.csv("matrix1.txt", sep=" ", inferSchema=True)
df.show()

+---+---+---+
|_c0|_c1|_c2|
+---+---+---+
|1.2|3.4|2.3|
|2.3|1.1|1.5|
|3.3|1.8|4.5|
|5.3|2.2|4.5|
|9.3|8.1|0.3|
|4.5|4.3|2.1|
+---+---+---+

# do the sum of the multiplication that we want, and get
# one data frame for each column
colDFs = []
for c2 in df.columns:
    colDFs.append( df.select( [ F.sum(df[c1]*df[c2]).alias("op_{0}".format(i)) for i,c1 in enumerate(df.columns) ] ) )

# now union those separate data frames to build the "matrix"
mtxDF = reduce(lambda a,b: a.select(a.columns).union(b.select(a.columns)), colDFs )
mtxDF.show()
+------------------+------------------+------------------+
| op_0| op_1| op_2|
+------------------+------------------+------------------+
| 152.45|118.88999999999999| 57.15|
|118.88999999999999|104.94999999999999| 38.93|
| 57.15| 38.93|52.540000000000006|
+------------------+------------------+------------------+
This seems to be the same result that you get from numpy.
import numpy
a = numpy.genfromtxt("matrix1.txt")
numpy.dot(a.T, a)
array([[ 152.45, 118.89, 57.15],
[ 118.89, 104.95, 38.93],
[ 57.15, 38.93, 52.54]])

Does h5py offer a neat simple way to create or use a group or dataset?

I'm fixing a python script using h5py. It contains code like this:
hdf = h5py.File(hdf5_filename, 'a')
...
g = hdf.create_group('foo')
g.create_dataset('bar', ...whatever...)
Sometimes this runs on a file which already has a group named 'foo', in which case I see "ValueError: Unable to create group (Name already exists)"
One way to fix this is to replace the one simple create_group line with four lines, like this:
if 'foo' in hdf.keys():
    g = hdf['foo']
else:
    g = hdf.create_group('foo')
g.create_dataset(...etc...)
Is there a neater way to do this, maybe in only one line? Like how with files in the standard C library, 'a' mode will either append to an existing file, or create a file if it's not already there.
Same goes for datasets - I have
create_dataset('bar', ...)
but should check first:
if 'bar' in g.keys():
    d = g['bar']
else:
    d = g.create_dataset('bar')
My wish: to find h5py has methods named create_or_use_group() and create_or_use_dataset(). What actually exists?
Yes: require_group and require_dataset:
import h5py
import numpy as np

with h5py.File("/tmp/so_hdf5/test2.h5", 'w') as f:
    a = f.create_dataset('a', data=np.random.random((10, 10)))

with h5py.File("/tmp/so_hdf5/test2.h5", 'r+') as f:
    a = f.require_dataset('a', shape=(10, 10), dtype='float64')
    d = f.require_dataset('d', shape=(20, 20), dtype='float64')
    g = f.require_group('foo')
    print(a)
    print(d)
    print(g)
Note that you do need to know the shape and dtype of the dataset, otherwise require_dataset throws a TypeError. In that case, you could do something like:
try:
    a = f.require_dataset('a', shape=(10, 10), dtype='float64')
except TypeError:
    a = f['a']
If you don't already know the shape and dtype, I don't think there's much advantage to require_dataset over using try ... except ...
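In that case, a tiny get-or-create helper along the lines of the question's own four-line pattern is about as neat as it gets (my own sketch; the helper name is made up, not part of h5py):

    def get_or_create_group(parent, name):
        # return the existing group if present, otherwise create it
        return parent[name] if name in parent else parent.create_group(name)

    g = get_or_create_group(hdf, 'foo')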