How to perform cross-validation without using ParamGridBuilder in PySpark?

I want to do sklearn-style cross-validation in PySpark without using ParamGridBuilder.
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.regression import LinearRegression

lr = LinearRegression(regParam=0.1, elasticNetParam=0.2, maxIter=100)
crossval = CrossValidator(estimator=lr,
                          evaluator=BinaryClassificationEvaluator(),
                          numFolds=2)
Is it possible to perform cross-validation this way, without using ParamGridBuilder? My use case is that I want to pass the parameters to the LinearRegression class as constructor arguments rather than as a ParamGrid object.

One easy solution would be to provide only the parameters you want to use in your ParamGrid:
paramGrid = ParamGridBuilder() \
    .addGrid(lr.regParam, [0.1]) \
    .addGrid(lr.elasticNetParam, [0.2]) \
    .addGrid(lr.maxIter, [100]) \
    .build()

crossval = CrossValidator(estimator=lr,
                          estimatorParamMaps=paramGrid,
                          evaluator=BinaryClassificationEvaluator(),
                          numFolds=2)
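If you do not want to vary any parameter at all, another option (a small sketch, assuming current PySpark behaviour) is an empty grid: ParamGridBuilder().build() produces a single empty ParamMap, so CrossValidator simply reuses the values you passed to the LinearRegression constructor. Note that the evaluator has to match the model; RegressionEvaluator below is an assumption, since LinearRegression is a regressor.
from pyspark.ml.evaluation import RegressionEvaluator

empty_grid = ParamGridBuilder().build()  # a single ParamMap with no overrides

crossval = CrossValidator(estimator=lr,
                          estimatorParamMaps=empty_grid,
                          evaluator=RegressionEvaluator(),
                          numFolds=2)
# cv_model = crossval.fit(train_df)  # train_df: your own DataFrame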
You can always code your own version of k-fold cross-validation, splitting your dataset into K parts, for example:
from functools import reduce
from pyspark.ml.evaluation import RegressionEvaluator

# Split the dataset into K folds (here K = 2)
folds = df.randomSplit([0.5, 0.5])

evaluator = RegressionEvaluator()
res = []
for i, test in enumerate(folds):
    # Train on every fold except the one held out as the test set
    train = reduce(lambda a, b: a.union(b), [f for j, f in enumerate(folds) if j != i])
    fitted = lr.fit(train)
    res.append(evaluator.evaluate(fitted.transform(test)))
do_what_you_want(res)

Related

How to union multiple dynamic inputs in Palantir Foundry?

I want to union multiple datasets in Palantir Foundry; the names of the datasets are dynamic, so I would not be able to give the dataset names in transform_df() statically. Is there a way I can dynamically take multiple inputs into transform_df and union all of those dataframes?
I tried looping over the datasets like:
li = ['dataset1_path', 'dataset2_path']
union_df = None
for p in li:
    @transform_df(
        Output(p + "_output"),
        my_input=Input(p),
    )
    def my_compute_function(my_input):
        return my_input

    if union_df is None:
        union_df = my_compute_function
    else:
        union_df = union_df.union(my_compute_function)
But, this doesn't generate the unioned output.
This should work for you with some changes. It is an example of a dynamic dataset built from JSON files; your situation may differ only slightly. Here is a generalized way of handling dynamic JSON input datasets that should be adaptable to any dynamic input file type, or to any internal Foundry dataset you can specify. This generic example works on a set of JSON files uploaded to a dataset node in the platform and is fully dynamic. Doing a union afterwards should be a simple matter.
There's some bonus logging going on here as well.
Hope this helps
from transforms.api import Input, Output, transform
from pyspark.sql import functions as F
import json
import logging
def transform_generator():
    transforms = []
    transf_dict = {}  # enter your dynamic mappings here

    for value in transf_dict:
        @transform(
            out=Output(' path to your output here '.format(val=value)),
            inpt=Input(" path to input here ".format(val=value)),
        )
        def update_set(ctx, inpt, out):
            spark = ctx.spark_session
            sc = spark.sparkContext

            filesystem = list(inpt.filesystem().ls())
            file_dates = []
            for files in filesystem:
                with inpt.filesystem().open(files.path) as fi:
                    data = json.load(fi)
                    file_dates.append(data)

            logging.info('info logs:')
            logging.info(file_dates)

            json_object = json.dumps(file_dates)
            df_2 = spark.read.option("multiline", "true").json(sc.parallelize([json_object]))
            df_2 = df_2.withColumn('upload_date', F.current_date())
            df_2 = df_2.drop_duplicates()
            out.write_dataframe(df_2)

        transforms.append(update_set)  # append the decorated transform defined above

    return transforms

TRANSFORMS = transform_generator()
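One practical note (an assumption about your project setup, so double-check against your repo's pipeline definition): generated transforms like the TRANSFORMS list above still have to be registered with the project's Pipeline object, typically in pipeline.py, or the build will not pick them up. Something along these lines, where the module name is hypothetical:
from transforms.api import Pipeline
import myproject.datasets.dynamic_transforms as dynamic_transforms  # hypothetical module holding the code above

my_pipeline = Pipeline()
my_pipeline.add_transforms(*dynamic_transforms.TRANSFORMS)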
So this question breaks down into two questions.
How to handle transforms with programmatic input paths
To handle transforms with programmatic inputs, it is important to remember two things:
1st - Transforms will determine your inputs and outputs at CI time. This means that you can have Python code that generates transforms, but you cannot read paths from a dataset; they need to be hardcoded into the Python code that generates the transform.
2nd - Your transforms will be created once, during CI execution. This means that you can't have incremental or special logic to generate different paths whenever the dataset builds.
With these two premises, as in your example or @jeremy-david-gamet's (thank you for the reply, gave you a +1), you can have Python code that generates your paths at CI time.
dataset_paths = ['dataset1_path', 'dataset2_path']

for path in dataset_paths:
    @transform_df(
        Output(f"{path}_output"),
        my_input=Input(path),
    )
    def my_compute_function(my_input):
        return my_input
However, to union them you'll need a second transform to execute the union. You'll need to pass multiple inputs, so you can use *args or **kwargs for this:
import pyspark.sql  # needed for the isinstance check below

dataset_paths = ['dataset1_path', 'dataset2_path']
all_args = [Input(path) for path in dataset_paths]
all_args.append(Output("path/to/unioned_dataset"))

@transform_df(*all_args)
def my_compute_function(*args):
    input_dfs = []
    for arg in args:
        # there are other arguments like ctx in the args list, so we need to check the type.
        # You can also use kwargs for more determinism.
        if isinstance(arg, pyspark.sql.DataFrame):
            input_dfs.append(arg)

    # now that you have your dfs in a list you can union them
    # Note I didn't test this code, but it should be something like this
    ...
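To make the elided union step concrete, here is one hedged way it could look, folding unionByName over the collected DataFrames (union_all is a hypothetical helper; allowMissingColumns needs Spark 3.1+ and only matters if the schemas differ):
from functools import reduce

def union_all(dfs):
    # Fold unionByName over the list of DataFrames collected above
    return reduce(lambda left, right: left.unionByName(right, allowMissingColumns=True), dfs)

# inside my_compute_function, once input_dfs is populated:
# return union_all(input_dfs)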
How to union datasets with different schemas
For this part there are plenty of Q&As out there on how to union different dataframes in Spark. Here is a short code example, copied from https://stackoverflow.com/a/55461824/26004:
from pyspark.sql import SparkSession, HiveContext
from pyspark.sql.functions import lit
from pyspark.sql import Row

def customUnion(df1, df2):
    cols1 = df1.columns
    cols2 = df2.columns
    total_cols = sorted(cols1 + list(set(cols2) - set(cols1)))

    def expr(mycols, allcols):
        def processCols(colname):
            if colname in mycols:
                return colname
            else:
                return lit(None).alias(colname)
        cols = map(processCols, allcols)
        return list(cols)

    appended = df1.select(expr(cols1, total_cols)).union(df2.select(expr(cols2, total_cols)))
    return appended
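If you have more than two dataframes to combine, customUnion can simply be folded over the list (a short sketch; df_a, df_b and df_c are hypothetical inputs with partially overlapping schemas):
from functools import reduce

dfs = [df_a, df_b, df_c]
unioned = reduce(customUnion, dfs)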
Since inputs and outputs are determined at CI time, we cannot form truly dynamic inputs. We will have to point to specific datasets in the code somehow. Assuming the paths of the datasets share the same root, the following seems to require minimal maintenance:
from transforms.api import transform_df, Input, Output
from functools import reduce

datasets = [
    'dataset1',
    'dataset2',
    'dataset3',
]

inputs = {f'inp{i}': Input(f'input/folder/path/{x}') for i, x in enumerate(datasets)}

kwargs = {
    **{'output': Output('output/folder/path/unioned_dataset')},
    **inputs,
}

@transform_df(**kwargs)
def my_compute_function(**inputs):
    unioned_df = reduce(lambda df1, df2: df1.unionByName(df2), inputs.values())
    return unioned_df
Regarding unions of different schemas, since Spark 3.1 one can use this:
df1.unionByName(df2, allowMissingColumns=True)
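For a quick illustration of what allowMissingColumns does (a toy sketch, assuming an active SparkSession named spark):
df1 = spark.createDataFrame([(1, 'a')], ['id', 'col_a'])
df2 = spark.createDataFrame([(2, 'b')], ['id', 'col_b'])

# Result has columns id, col_a, col_b; values missing on either side are filled with nulls
df1.unionByName(df2, allowMissingColumns=True).show()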

How can I reuse the dataframe and use an alternative to iloc to run an iterative imputer in Azure Databricks?

I am running an iterative imputer in a Jupyter Notebook to first mark the known incorrect values as NaN and then run the iterative imputer to impute the correct values, to achieve the required sharpness in the data. The sample code is given below:
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
import numpy as np
import pandas as pd

imp = IterativeImputer()  # imputer instance (configured elsewhere in the notebook)
idx = [761, 762, 763, 764]
cols = ['11', '12', '13', '14']

def fit_imputer():
    for i in range(len(idx)):
        for col in cols:
            dfClean.iloc[idx[i], col] = np.nan
            print('Index = {} Col = {} Defiled Value is: {}'.format(idx[i], col, dfClean.iloc[idx[i], col]))
            # Run Imputer for Individual row
            tempOut = imp.fit_transform(dfClean)
            print("Imputed Value = ", tempOut[idx[i], col])
            dfClean.iloc[idx[i], col] = tempOut[idx[i], col]
            print("new dfClean Value = ", dfClean.iloc[idx[i], col])
            origVal.append(dfClean_Orig.iloc[idx[i], col])
I get an error when I try to run this code on Azure Databricks using PySpark or Scala, because the dataframes in Spark are immutable and I also cannot use iloc the way I used it on the pandas dataframe.
Is there a way, or a better way, of implementing such imputation in Databricks?

I have an issue with regex extract with multiple matches

I am trying to extract 60 ML and 0.5 ML from the string "60 ML of paracetomol and 0.5 ML of XYZ". This string is part of a column X in a Spark dataframe. Though I am able to test my regex code to extract 60 ML and 0.5 ML in a regex validator, I am not able to extract them using regexp_extract, as it targets only the 1st match. Hence I am getting only 60 ML.
Can you suggest the best way of doing it using a UDF?
Here is how you can do it with a python UDF:
from pyspark.sql.types import *
from pyspark.sql.functions import *
import re

data = [('60 ML of paracetomol and 0.5 ML of XYZ',)]
df = sc.parallelize(data).toDF(['str'])

# Define the function you want to return
def extract(s):
    all_matches = re.findall(r'\d+(?:\.\d+)? ML', s)
    return all_matches

# Create the UDF; note that you need to declare the return schema matching the returned type
extract_udf = udf(extract, ArrayType(StringType()))

# Apply it
df2 = df.withColumn('extracted', extract_udf('str'))
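Since the UDF returns an array column, you can explode it afterwards if you want one match per row (a small usage sketch; the column names are just examples):
from pyspark.sql.functions import explode

df_exploded = df2.withColumn('match', explode('extracted'))
df_exploded.select('match').show(truncate=False)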
Python UDFs take a significant performance hit compared to native DataFrame operations. After thinking about it a little more, here is another way to do it without using a UDF. The general idea is to replace all the text that isn't what you want with commas, then split on the commas to create your array of final values. If you only want the numbers, you can update the regexes to take 'ML' out of the capture group.
pattern = r'\d+(?:\.\d+)? ML'
split_pattern = r'.*?({pattern})'.format(pattern=pattern)
end_pattern = r'(.*{pattern}).*?$'.format(pattern=pattern)

# Replace each match (and the text before it) with the match followed by a comma
df2 = df.withColumn('a', regexp_replace('str', split_pattern, '$1,'))
# Drop any trailing text after the last match
df3 = df2.withColumn('a', regexp_replace('a', end_pattern, '$1'))
# Split on the commas to get the array of matches
df4 = df3.withColumn('a', split('a', r','))
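As a side note (this assumes you are on a recent Spark version): Spark 3.1+ also ships a regexp_extract_all SQL function that returns all matches directly, which avoids both the UDF and the replace-and-split steps. It can be called through expr:
from pyspark.sql.functions import expr

df5 = df.withColumn('extracted', expr(r"regexp_extract_all(str, '(\\d+(\\.\\d+)? ML)', 1)"))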

pyspark - how to cross validate several ML algorithms

I want to be able to choose the best-fitting algorithm with its best params.
How can I do it in one go, without creating separate pipelines for each algorithm, and without the cross-validation checking params that are not relevant for a specific algorithm?
I.e. I want to check how logistic regression performs against random forest.
My code is:
lr = LogisticRegression().setFamily("multinomial")

# Chain indexer and tree in a Pipeline
pipeline = Pipeline(stages=[labelIndexer, labelIndexer2, assembler, lr, labelconverter])

paramGrid = ParamGridBuilder() \
    .addGrid(lr.regParam, [0.1, 0.3, 0.01]) \
    .addGrid(lr.elasticNetParam, [0.1, 0.8, 0.01]) \
    .addGrid(lr.maxIter, [10, 20, 25]) \
    .build()

crossval = CrossValidator(estimator=pipeline,
                          estimatorParamMaps=paramGrid,
                          evaluator=RegressionEvaluator(),
                          numFolds=2)  # use 3+ folds in practice

# Train model. This also runs the indexer.
model = crossval.fit(trainingData)
I've written a quick and dirty workaround in Python/PySpark. It is a bit primitive (it doesn't have a corresponding Scala class) and I think it lacks save/load capabilities, but it might be a starting point for your case. Eventually it might become new functionality in Spark; it would be nice to have.
The idea is to have a special pipeline stage that acts as a switch between different objects, and maintains a dictionary to refer to them by string. The user can enable one or another by name. They can be Estimators, Transformers or a mix of both; the user is responsible for keeping the pipeline coherent (doing things that make sense, at her own risk). The parameter holding the name of the enabled stage can be included in the grid to be cross-validated.
from pyspark.ml.wrapper import JavaEstimator
from pyspark.ml.base import Estimator, Transformer
from pyspark.ml.param import Param, Params, TypeConverters

class PipelineStageChooser(JavaEstimator):
    selectedStage = Param(Params._dummy(), "selectedStage", "key of the selected stage in the dict",
                          typeConverter=TypeConverters.toString)
    stagesDict = None
    _paramMap = {}

    def __init__(self, stagesDict, selectedStage):
        super(PipelineStageChooser, self).__init__()
        self.stagesDict = stagesDict

        if selectedStage not in self.stagesDict.keys():
            raise KeyError("selected stage {0} not found in stagesDict".format(selectedStage))

        if isinstance(self.stagesDict[selectedStage], Transformer):
            self.fittedSelectedStage = self.stagesDict[selectedStage]

        for stage in stagesDict.values():
            if not (isinstance(stage, Estimator) or isinstance(stage, Transformer)):
                raise TypeError("Cannot recognize a pipeline stage of type %s." % type(stage))

        self._set(selectedStage=selectedStage)
        self._java_obj = None

    def fit(self, dataset, params=None):
        selectedStage_str = self.getOrDefault(self.selectedStage)
        if isinstance(self.stagesDict[selectedStage_str], Estimator):
            return self.stagesDict[selectedStage_str].fit(dataset, params=params)
        elif isinstance(self.stagesDict[selectedStage_str], Transformer):
            return self.stagesDict[selectedStage_str]
Use example:
count_vectorizer = CountVectorizer()  # set params
hashing_tf = HashingTF()  # set params

chooser = PipelineStageChooser(stagesDict={"count_vectorizer": count_vectorizer,
                                           "hashing_tf": hashing_tf},
                               selectedStage="count_vectorizer")

pipeline = Pipeline(stages=[chooser])

# Test which among CountVectorizer or HashingTF works better to create features
# Could be used as well to decide between different ML algorithms
paramGrid = ParamGridBuilder() \
    .addGrid(chooser.selectedStage, ["count_vectorizer", "hashing_tf"]) \
    .build()
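From there the grid can be plugged into a CrossValidator just like in the question's code (a sketch only; the evaluator choice, the missing pipeline stages and trainingData are assumptions about your setup, and remember the chooser above lacks save/load):
crossval = CrossValidator(estimator=pipeline,
                          estimatorParamMaps=paramGrid,
                          evaluator=MulticlassClassificationEvaluator(),  # pick an evaluator that fits your label
                          numFolds=3)
cv_model = crossval.fit(trainingData)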

PySpark: Error "Cannot pickle standard input" on function map

I'm trying to learn to use PySpark.
I'm using spark-2.2.0 with Python 3.
I'm facing a problem now and I can't find where it came from.
My project is to adapt an algorithm written by a data scientist so that it runs distributed. The code below is what I have to use to extract the features from images, and I have to adapt it to extract features with PySpark.
import json
import sys

# Dependencies can be installed by running:
# pip install keras tensorflow h5py pillow

# Run script as:
# ./extract-features.py images/*.jpg

from keras.applications.vgg16 import VGG16
from keras.models import Model
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
import numpy as np

def main():
    # Load model VGG16 as described in https://arxiv.org/abs/1409.1556
    # This is going to take some time...
    base_model = VGG16(weights='imagenet')
    # Model will produce the output of the 'fc2' layer which is the penultimate neural network layer
    # (see the paper above for more details)
    model = Model(input=base_model.input, output=base_model.get_layer('fc2').output)

    # For each image, extract the representation
    for image_path in sys.argv[1:]:
        features = extract_features(model, image_path)
        with open(image_path + ".json", "w") as out:
            json.dump(features, out)

def extract_features(model, image_path):
    img = image.load_img(image_path, target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    features = model.predict(x)
    return features.tolist()[0]

if __name__ == "__main__":
    main()
I have written the beginning of the code:
rdd = sc.binaryFiles(PathImages)
base_model = VGG16(weights='imagenet')
model = Model(input=base_model.input, output=base_model.get_layer('fc2').output)
rdd2 = rdd.map(lambda x : (x[0], extract_features(model, x[0][5:])))
rdd2.collect()[0]
When I try to extract the features, there is an error:
~/Code/spark-2.2.0-bin-hadoop2.7/python/pyspark/cloudpickle.py in
save_file(self, obj)
623 return self.save_reduce(getattr, (sys,'stderr'), obj=obj)
624 if obj is sys.stdin:
--> 625 raise pickle.PicklingError("Cannot pickle standard input")
626 if hasattr(obj, 'isatty') and obj.isatty():
627 raise pickle.PicklingError("Cannot pickle files that map to tty objects")
PicklingError: Cannot pickle standard input
I have tried multiple things and here is my first result. I know that the error comes from the line below in the method extract_features:
features = model.predict(x)
and when I run this line outside of a map function or PySpark, it works fine.
I think the problem comes from the "model" object and its serialization with PySpark.
Maybe I'm not using a good way to distribute this with PySpark; if you have any clue to help me, I will take it.
Thanks in advance.
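For context, a minimal sketch of a workaround that is commonly used for this kind of pickling error: avoid capturing the Keras model in the closure that gets shipped to the executors, and instead build it inside each partition. This is only an illustration under assumptions about the setup above (it reuses VGG16, Model and extract_features from the script, and keras/tensorflow must be installed on every executor), not a tested answer.
def extract_features_partition(paths):
    # Build the model on the executor, so it never has to be pickled on the driver
    base_model = VGG16(weights='imagenet')
    model = Model(input=base_model.input, output=base_model.get_layer('fc2').output)
    for path in paths:
        yield path, extract_features(model, path[5:])  # [5:] strips the "file:" prefix

rdd = sc.binaryFiles(PathImages)
features_rdd = rdd.map(lambda x: x[0]).mapPartitions(extract_features_partition)
features_rdd.collect()[0]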