inferSchema in spark csv package - pyspark

I am trying to read a CSV file as a Spark DataFrame with inferSchema enabled, but then I am unable to get fv_df.columns. Below is the error message:
>>> fv_df = spark.read.option("header", "true").option("delimiter", "\t").csv('/home/h212957/FacilityView/datapoints_FV.csv', inferSchema=True)
>>> fv_df.columns
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/h212957/spark/python/pyspark/sql/dataframe.py", line 687, in columns
    return [f.name for f in self.schema.fields]
  File "/home/h212957/spark/python/pyspark/sql/dataframe.py", line 227, in schema
    self._schema = _parse_datatype_json_string(self._jdf.schema().json())
  File "/home/h212957/spark/python/pyspark/sql/types.py", line 894, in _parse_datatype_json_string
    return _parse_datatype_json_value(json.loads(json_string))
  File "/home/h212957/spark/python/pyspark/sql/types.py", line 911, in _parse_datatype_json_value
    return _all_complex_types[tpe].fromJson(json_value)
  File "/home/h212957/spark/python/pyspark/sql/types.py", line 562, in fromJson
    return StructType([StructField.fromJson(f) for f in json["fields"]])
  File "/home/h212957/spark/python/pyspark/sql/types.py", line 428, in fromJson
    _parse_datatype_json_value(json["type"]),
  File "/home/h212957/spark/python/pyspark/sql/types.py", line 907, in _parse_datatype_json_value
    raise ValueError("Could not parse datatype: %s" % json_value)
ValueError: Could not parse datatype: decimal(7,-31)
However, if I don't infer the schema then I am able to fetch the columns and do further operations. I am unable to understand why it works this way. Can anyone please explain?

I suggest you use the function .load rather than .csv, something like this:
data = sc.read.load(path_to_file,
                    format='com.databricks.spark.csv',
                    header='true',
                    inferSchema='true').cache()
Of course you can add more options. Then you can simply get what you want:
data.columns
Another way of doing this (to get the columns) is to use it this way:
data = sc.textFile(path_to_file)
And to get the headers (columns), just use
data.first()
Looks like you are trying to get your schema from your CSV file without opening it! The above should help you to get the columns and hence manipulate whatever you like; a small sketch of splitting that header line follows below.
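For example, a minimal sketch (assuming a tab-delimited file, as in the original question; the variable names here are just illustrative):
header_line = data.first()
column_names = header_line.split('\t')   # list of column names from the header row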
Note: to use '.columns' your 'sc' should be configured as:
spark = SparkSession.builder \
    .master("yarn") \
    .appName("experiment-airbnb") \
    .enableHiveSupport() \
    .getOrCreate()
sc = SQLContext(spark)
Good luck!

Please try the code below; it infers the schema along with the header:
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('operation').getOrCreate()
df = spark.read.csv("C:/LEARNING/Spark_DataFrames/stock.csv", inferSchema=True, header=True)
df.show()

It would be good if you could provide some sample data next time; otherwise we cannot know what your CSV looks like. Concerning your question, it looks like your CSV column is not a decimal all the time. inferSchema samples the data and assigns a data type; in your case it chose a DecimalType, but another row may contain text (or a differently formatted number), which is why the error occurs.
If you don't infer the schema then, of course, it works, since everything is read as a StringType.
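A common workaround, sketched below, is to skip inference and pass an explicit schema (the column names and types here are hypothetical, since the real ones were not posted):
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Hypothetical columns; replace with the real ones from the file's header.
schema = StructType([
    StructField("facility_id", StringType(), True),
    StructField("datapoint_value", DoubleType(), True),
])

fv_df = spark.read \
    .option("header", "true") \
    .option("delimiter", "\t") \
    .schema(schema) \
    .csv('/home/h212957/FacilityView/datapoints_FV.csv')
fv_df.columns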

Related

Pyspark - Code to calculate file hash/checksum not working

I have the below PySpark code to calculate the SHA1 hash of each file in a folder. I'm using spark.sparkContext.binaryFiles to get an RDD of pairs where the key is the file name and the value is a file-like object, on which I'm calculating the hash in a map function, rdd.mapValues(map_hash_file). However, I'm getting the below error at the second-last line, which I don't understand. How can this be fixed, please? Thanks.
Error:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 66.0 failed 4 times, most recent failure: Lost task 0.3 in stage 66.0
Code:
#Function to calculate hash-value/checksum of a file
def map_hash_file(row):
    file_name = row[0]
    file_contents = row[1]
    sha1_hash = hashlib.sha1()
    sha1_hash.update(file_contents.encode('utf-8'))
    return file_name, sha1_hash.hexdigest()

rdd = spark.sparkContext.binaryFiles('/mnt/workspace/Test_Folder', minPartitions=None)

#As a check, print the list of files collected in the RDD
dataColl = rdd.collect()
for row in dataColl:
    print(row[0])

#Apply the function to calculate the hash of each file and store the results
hash_values = rdd.mapValues(map_hash_file)

#Store each file name and its hash value in a dataframe to later export as a CSV
df = spark.createDataFrame(data=hash_values)
display(df)
You will get your expected result if you do the following:
Change file_contents.encode('utf-8') to file_contents: file_contents is already of type bytes.
Change rdd.mapValues(map_hash_file) to rdd.map(map_hash_file): the function map_hash_file expects a tuple.
Also consider:
Adding import hashlib
Not collecting the content of all files to the driver; you risk consuming all the memory on the driver.
With the above changes, your code should look something like this:
import hashlib

#Function to calculate hash-value/checksum of a file
def map_hash_file(row):
    file_name = row[0]
    file_contents = row[1]
    sha1_hash = hashlib.sha1()
    sha1_hash.update(file_contents)
    return file_name, sha1_hash.hexdigest()

rdd = spark.sparkContext.binaryFiles('/mnt/workspace/Test_Folder', minPartitions=None)

#Apply the function to calculate the hash of each file and store the results
hash_values = rdd.map(map_hash_file)

#Store each file name and its hash value in a dataframe to later export as a CSV
df = spark.createDataFrame(data=hash_values)
display(df)
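Since the stated goal is to export the result as a CSV, a minimal follow-up sketch (the column names and output path here are assumptions, not from the original post):
# Name the columns and write out a single CSV file (path is hypothetical)
df.toDF("file_name", "sha1_hash") \
  .coalesce(1) \
  .write.mode("overwrite") \
  .csv("/mnt/workspace/file_hashes", header=True)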

How to read the last line in CSV line in GATLING

I use feeders from a CSV file, and I always want to read the last line in this file:
val actors = csv("./src/test/resources/Data/ksp-acteurs.csv").circular

.feed(actors)
.exec(http("K_actors")
  .get("https://URL/ksp-acteurs/${Col1}") //need to extract the last value in this column
  .header("Authorization", "Bearer ${jwtoken}")
  .check(status is 200))
You could just do something like this:
val feeder = csv("./src/test/resources/Data/ksp-acteurs.csv")
val lastLine = feeder.readRecords.last.values
The important thing is that you call readRecords before you call circular; otherwise readRecords will not be able to find the last line and the test will fail with an out-of-memory exception.

How to save pyspark data frame in a single csv file

This is in continuation of this how to save dataframe into csv pyspark thread.
I'm trying to save my PySpark data frame df in PySpark 3.0.1. So I wrote
df.coalesce(1).write.csv('mypath/df.csv')
But after executing this, I'm seeing a folder named df.csv in mypath which contains the following 4 files:
1. _committed_..
2. _started_...
3. _SUCCESS
4. part-00000-.. .csv
Can you suggest how I can save all the data in df.csv?
You can use .coalesce(1) to save the file in just one CSV partition, then rename this CSV and move it to the desired folder.
Here is a function that does that:
df: your DataFrame
fileName: name you want for the CSV file
filePath: folder where you want to save it
def export_csv(df, fileName, filePath):
    filePathDestTemp = filePath + ".dir/"
    df\
      .coalesce(1)\
      .write\
      .format("csv")\
      .save(filePathDestTemp)
    listFiles = dbutils.fs.ls(filePathDestTemp)
    for subFiles in listFiles:
        if subFiles.name[-4:] == ".csv":
            dbutils.fs.cp(filePathDestTemp + subFiles.name, filePath + fileName + '.csv')
    dbutils.fs.rm(filePathDestTemp, recurse=True)
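A quick usage sketch (the DataFrame and paths are whatever you have, e.g. the df and mypath from the question):
# Produces mypath/df.csv
export_csv(df, "df", "mypath/")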
If you want to get one file named df.csv as output, you can first write into a temporary folder, then move the part file generated by Spark and rename it.
These steps can be done using the Hadoop FileSystem API, available via the JVM gateway:
temp_path = "mypath/__temp"
target_path = "mypath/df.csv"
df.coalesce(1).write.mode("overwrite").csv(temp_path)
Path = sc._gateway.jvm.org.apache.hadoop.fs.Path
# get the part file generated by spark write
fs = Path(temp_path).getFileSystem(sc._jsc.hadoopConfiguration())
csv_part_file = fs.globStatus(Path(temp_path + "/part*"))[0].getPath()
# move and rename the file
fs.rename(csv_part_file, Path(target_path))
fs.delete(Path(temp_path), True)

Is there a way to save each HDF5 data set as a .csv column?

I'm struggling with an H5 file, trying to extract and save the data as a multi-column CSV. As shown in the picture, the structure of the H5 file consists of main groups (Genotypes, Positions, and taxa). The main group Genotypes contains more than 1500 subgroups (genotype partial names), and each subgroup contains sub-sub-groups (complete names of genotypes). There are about 1 million datasets (named calls), each laid out in one sub-sub-group, and I need each one to be written to a separate column. The problem is that when I use h5py (the group.get function) I have to use the path of every calls dataset. I extracted all the paths ending in "calls", but I can't reach all 1 million calls to get them into a CSV file.
Could anybody help me extract the "calls" datasets, which are 8-bit integers, as separate columns in a CSV file?
By running the code in the first answer I get this error:
Traceback (most recent call last):
  File "path/file.py", line 32, in <module>
    h5r.visititems(dump_calls2csv) #NOTE: function name is NOT a string!
  File "path/file.py", line 565, in visititems
    return h5o.visit(self.id, proxy)
  File "h5py_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py\h5o.pyx", line 355, in h5py.h5o.visit
  File "h5py\defs.pyx", line 1641, in h5py.defs.H5Ovisit_by_name
  File "h5py\h5o.pyx", line 302, in h5py.h5o.cb_obj_simple
  File "path/file.py", line 564, in proxy
    return func(name, self[name])
  File "path/file.py", line 10, in dump_calls2csv
    np.savetxt(csvfname, arr, fmt='%5d', delimiter=',')
  File "<array_function internals>", line 6, in savetxt
  File "path/file.py", line 1377, in savetxt
    open(fname, 'wt').close()
OSError: [Errno 22] Invalid argument: 'Genotypes_ArgentineFlintyComposite-C(1)-37-B-B-B2-1-B25-B2-B?-1-B:100000977_calls.csv
16-May-2020 Update:
Added a second example that reads and exports using PyTables (aka tables) using .walk_nodes(). I prefer this method over h5py .visititems().
For clarity, I separated the code that creates the example file from the 2 examples that read and export the CSV data.
Enclosed below are 2 simple examples that show how to recursively loop over all top-level objects. For completeness, the code to create the test file is at the end of this post.
Example 1: with h5py
This example uses the .visititems() method with a callable function (dump_calls2csv).
Summary of this procedure:
1) Checks for dataset objects with calls in the name.
2) When it finds a matching object it does the following:
a) reads the data into a Numpy array,
b) creates a unique file name (using string substitution on the H5 group/dataset path name to insure uniqueness),
c) writes the data to the file with numpy.savetxt().
import h5py
import numpy as np

def dump_calls2csv(name, node):
    if isinstance(node, h5py.Dataset) and 'calls' in node.name:
        print('visiting object:', node.name, ', exporting data to CSV')
        csvfname = node.name[1:].replace('/', '_') + '.csv'
        arr = node[:]
        np.savetxt(csvfname, arr, fmt='%5d', delimiter=',')

##########################
with h5py.File('SO_61725716.h5', 'r') as h5r:
    h5r.visititems(dump_calls2csv)  #NOTE: function name is NOT a string!
If you want to get fancy, you can skip the intermediate arr variable and pass node[:] directly to np.savetxt().
Also, if you want headers in your CSV, extract and reference the dtype field names from the dataset (I did not create any in this example); a small sketch is shown below.
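For example, a minimal sketch (assuming the dataset has a compound dtype with named fields; arr.dtype.names is None otherwise, as it is for the test file below):
if arr.dtype.names is not None:
    # use the field names as the CSV header row
    np.savetxt(csvfname, arr, fmt='%5d', delimiter=',',
               header=','.join(arr.dtype.names), comments='')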
Example 2: with PyTables (tables)
This example uses the .walk_nodes() method with a filter: classname='Leaf'. In PyTables, a leaf can be any of the storage classes (Arrays and Table).
The procedure is similar to the method above. walk_nodes() simplifies the process to find datasets and does NOT require a call to a separate function.
import tables as tb
import numpy as np

with tb.File('SO_61725716.h5', 'r') as h5r:
    for node in h5r.walk_nodes('/', classname='Leaf'):
        print('visiting object:', node._v_pathname, 'export data to CSV')
        csvfname = node._v_pathname[1:].replace('/', '_') + '.csv'
        np.savetxt(csvfname, node.read(), fmt='%d', delimiter=',')
For completeness, use the code below to create the test file used in the examples.
import h5py
import numpy as np

ngrps = 2
nsgrps = 3
nds = 4
nrows = 10
ncols = 2

with h5py.File('SO_61725716.h5', 'w') as h5w:
    for gcnt in range(ngrps):
        grp1 = h5w.create_group('Group_' + str(gcnt))
        for scnt in range(nsgrps):
            grp2 = grp1.create_group('SubGroup_' + str(scnt))
            for dcnt in range(nds):
                i_arr = np.random.randint(1, 100, (nrows, ncols))
                ds = grp2.create_dataset('calls_' + str(dcnt), data=i_arr)

pyspark : Categorical variables preparation for kmeans

I know k-means is not a good choice for categorical data, but we don't have many options in Spark 1.4 for clustering categorical data.
Regardless of the above issue, I'm getting errors in my code below.
I read my table from Hive, use OneHotEncoder in a pipeline, and then feed the result into KMeans.
I'm getting an error when running this code.
Could the error be in the data type fed to KMeans? Does it expect numpy array data? If so, how can I transfer my indexed data to a numpy array?
All comments are appreciated, and thanks for your help!
The error I'm getting:
Traceback (most recent call last):
  File "/usr/hdp/2.3.2.0-2950/spark/python/lib/pyspark.zip/pyspark/daemon.py", line 157, in manager
  File "/usr/hdp/2.3.2.0-2950/spark/python/lib/pyspark.zip/pyspark/daemon.py", line 61, in worker
  File "/usr/hdp/2.3.2.0-2950/spark/python/lib/pyspark.zip/pyspark/worker.py", line 136, in main
    if read_int(infile) == SpecialLengths.END_OF_STREAM:
  File "/usr/hdp/2.3.2.0-2950/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 544, in read_int
    raise EOFError
EOFError
My code:
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.clustering import KMeans

#aline will be passed in from another rdd
aline = ["xxx", "yyy"]

# get the data from the Hive table, select the columns & convert back to an RDD
rddRes2 = hCtx.sql("select XXX, YYY from table1 where xxx <> ''")
rdd3 = rddRes2.rdd

#fill the NA values with "none"
Rdd4 = rdd3.map(lambda line: [x if len(x) else 'none' for x in line])

# convert it back to a DataFrame
DataDF = Rdd4.toDF(aline)

# Indexers encode strings with doubles
string_indexers = [
    StringIndexer(inputCol=x, outputCol="idx_{0}".format(x))
    for x in DataDF.columns if x not in ''
]
encoders = [
    OneHotEncoder(inputCol="idx_{0}".format(x), outputCol="enc_{0}".format(x))
    for x in DataDF.columns if x not in ''
]

# Assemble multiple columns into a single vector
assembler = VectorAssembler(
    inputCols=["enc_{0}".format(x) for x in DataDF.columns if x not in ''],
    outputCol="features")

pipeline = Pipeline(stages=string_indexers + encoders + [assembler])
model = pipeline.fit(DataDF)
indexed = model.transform(DataDF)
labeled_points = indexed.select("features").map(lambda row: LabeledPoint(row.features))

# Build the model (cluster the data)
clusters = KMeans.train(labeled_points, 3, maxIterations=10, runs=10, initializationMode="random")
I am not sure this correction alone will solve the problem, but you can convert dense vectors to arrays by using XXX.toArray().
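As a minimal sketch of that idea (assuming the indexed DataFrame from the pipeline above; note that KMeans.train in pyspark.mllib expects an RDD of vectors or arrays, not LabeledPoints):
# Pull out the assembled feature vectors and convert each one to a plain array
feature_rdd = indexed.select("features").rdd.map(lambda row: row.features.toArray())

# Cluster the feature arrays directly
clusters = KMeans.train(feature_rdd, 3, maxIterations=10, initializationMode="random")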