NullPointerException when reading a file with RowCsvInputFormat in Flink - Scala

I am a beginner with Flink streaming.
When reading a file with RowCsvInputFormat, the Row that the Kryo serializer creates does not work properly.
The code is below.
val readLocalCsvFile = new RowCsvInputFormat(
  new Path("flink-test/000000_1"),
  Array(Types.STRING, Types.STRING, Types.STRING),
  "\n",
  ","
)
val read = env.readFile(
  readLocalCsvFile,
  "flink-test/000000_1",
  FileProcessingMode.PROCESS_CONTINUOUSLY,
  1000000
)
read.print()
env.execute("test")
The contents of the file 000000_1 are as follows.
aa,bb,cc
aaa,bbb,ccc
As a result of debugging, I can see that the split values aa, bb, and cc are read correctly. But when those values are put into the Row's fields one by one, a NullPointerException is raised because fields is null.
Debugging also shows that the fields of the Row are null.
The code that creates the Row when the above program runs is the following; the KryoSerializer instantiates the Row:
val kryo = new EmptyFlinkScalaKryoInstantiator().newKryo
val row = kryo.newInstance(classOf[Row])
The output error is as follows.
java.lang.NullPointerException
at org.apache.flink.types.Row.setField(Row.java:140)
at org.apache.flink.api.java.io.RowCsvInputFormat.fillRecord(RowCsvInputFormat.java:162)
at org.apache.flink.api.java.io.RowCsvInputFormat.fillRecord(RowCsvInputFormat.java:33)
at org.apache.flink.api.java.io.CsvInputFormat.readRecord(CsvInputFormat.java:113)
at org.apache.flink.api.common.io.DelimitedInputFormat.nextRecord(DelimitedInputFormat.java:551)
at org.apache.flink.api.java.io.CsvInputFormat.nextRecord(CsvInputFormat.java:80)
at org.apache.flink.streaming.api.functions.source.ContinuousFileReaderOperator.readAndCollectRecord(ContinuousFileReaderOperator.java:387)
at ...

Maybe you can post the complete code.
Judging from the task's error report, it may be that the number of fields in the Row does not match the number of columns being parsed.
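If the field counts do match and the Kryo-created Row still comes up with null fields, one way to sidestep the Row/Kryo path entirely is to read the lines as plain text and split them yourself. This is only a rough workaround sketch, not a fix for RowCsvInputFormat: it assumes the standard Flink streaming Scala API, exactly three comma-separated columns per line, and it reads the file once instead of monitoring it continuously.
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
val parsed = env
  .readTextFile("flink-test/000000_1")
  .map { line =>
    val cols = line.split(",")
    (cols(0), cols(1), cols(2)) // tuples avoid Row entirely, so Kryo never has to instantiate one
  }
parsed.print()
env.execute("test")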

Related

How to use filter dynamically in Scala?

I have raw log file lines, about 1 TB in total, as below.
Test X1 SET WARN CATALOG MAP1,MAP2
INFO X2 SET WARN CATALOG MAPX,MAP2,MAP3
I read the log file using Spark with Scala and make an RDD of the log lines.
I need to filter only those lines which contain:
1. SET
2. INFO
3. CATALOG
I write the filter like this:
val filterRdd = rdd.filter(f => f.contains("SET")).filter(f => f.contains("INFO")).filter(f => f.contains("CATALOG"))
Can we do the same if these parameters are assigned to a list, and filter dynamically based on that list instead of writing so many lines? In this example I use only three restriction keywords, but in reality there are up to 15. Can it be done dynamically?
Something like this could work when you require all words to appear in a line:
val words = Seq("SET", "INFO", "CATALOG")
val filterRdd = rdd.filter(f => words.forall(w => f.contains(w)))
and if you want any:
val filterRdd = rdd.filter(f => words.exists(w => f.contains(w)))
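As a quick local check (just a sketch using the two sample lines from the question, not part of the original answer), the two variants behave like this:
val sample = Seq(
  "Test X1 SET WARN CATALOG MAP1,MAP2",
  "INFO X2 SET WARN CATALOG MAPX,MAP2,MAP3")
val words = Seq("SET", "INFO", "CATALOG")
sample.filter(f => words.forall(w => f.contains(w))) // keeps only the second line (the first has no "INFO")
sample.filter(f => words.exists(w => f.contains(w))) // keeps both lines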

Converting DataFrame String column containing missing values to Date in Julia

I'm trying to convert a DataFrame String column to Date format in Julia, but if the column contains missing values an error is produced:
ERROR: MethodError: no method matching Int64(::Missing)
The code I've tried to run (which works for columns with no missing data) is:
df_pp[:tod] = Date.(df_pp[:tod], DateFormat("d/m/y"));
Other lines of code I have tried are:
df_pp[:tod] = Date.(passmissing(df_pp[:tod]), DateFormat("d/m/y"));
df_pp[.!ismissing.(df_pp[:tod]), :tod] = Date.(df_pp[:tod], DateFormat("d/m/y"));
The code relates to a column named tod in a data frame named df_pp. Both the DataFrames & Dates packages have been loaded prior to attempting this.
The passmissing way is:
df_pp.tod = passmissing(x->Date(x, DateFormat("d/m/y"))).(df_pp.tod)
What happens here is this: passmissing takes a function and returns a new function that handles missings (by returning missing). Inside the parentheses, with x -> Date(x, DateFormat("d/m/y")), I define a new anonymous function that calls the Date function with the appropriate DateFormat.
Finally, I use the function returned by passmissing immediately on df_pp.tod, using a . to broadcast along the column.
It's easier to see the syntax if I split it up:
myDate(x) = Date(x, DateFormat("d/m/y"))
Date_accepting_missing = passmissing(myDate)
df_pp[:tod] = Date_accepting_missing.(df_pp[:tod])

NullPointerException in map function of Spark program

I am new to Scala. While running one Spark program I am getting a NullPointerException. Can anyone point me to how to solve this?
val data = spark.read.csv("C:\\File\\Path.csv").rdd
val result = data
  .map { line =>
    val population = line.getString(10).replaceAll(",", "")
    var popNum = 0L
    if (population.length() > 0)
      popNum = Long.parseLong(population)
    (popNum, line.getString(0))
  }
  .sortByKey(false)
  .first()
//spark.sparkContext.parallelize(Seq(result)).saveAsTextFile(args(1))
println("The result is: "+ result)
spark.stop
Error message:
Caused by: java.lang.NullPointerException
at com.nfs.WBI.KPI01.HighestUrbanPopulation$$anonfun$1.apply(HighestUrbanPopulation.scala:23)
at com.nfs.WBI.KPI01.HighestUrbanPopulation$$anonfun$1.apply(HighestUrbanPopulation.scala:22)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
I guess that in your input data there is at least one row that does not contain a value in column 10, so that line.getString(10) returns null. When calling replaceAll(",","") on that result, the NullPointerException occurs.
A quick fix would be to wrap the call to getString in an Option:
val population = Option(line.getString(10)).getOrElse("")
This returns the value of column 10 or an empty string if the column is null.
Some care must be taken when parsing the long. Unless you are absolutely sure that the column always contains a number, a NumberFormatException could be thrown.
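One defensive way to do that parsing inside the map function (just a sketch, not part of the original fix) is to combine the Option with scala.util.Try, so both a null column and a non-numeric value fall back to a default:
import scala.util.Try

val population = Option(line.getString(10)).getOrElse("").replaceAll(",", "")
val popNum = Try(population.toLong).getOrElse(0L) // 0L when the column is empty or not a valid number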
In general, you should check the inferSchema option of the CSV reader of Spark and try to avoid parsing the data yourself.
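For reference, reading with schema inference instead of hand-parsing could look roughly like this (a sketch using standard spark.read options; how well the inferred types come out depends on how clean the column values are):
val df = spark.read
  .option("header", "true")      // treat the first line as column names
  .option("inferSchema", "true") // let Spark guess the column types
  .csv("C:\\File\\Path.csv")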
In addition to the parsing issues mentioned elsewhere in this post, it seems that some numbers in your data contain commas as thousands separators. This is going to complicate CSV parsing and can cause undesirable behavior; you may have to sanitize the data even before reading it in Spark.
Also, if you're using Spark 2.0, it's best to use DataFrames/Datasets along with GroupBy constructs. See this post - How to deal with null values in spark reduceByKey function?. I suspect you have null values in your sort key as well.

Scala Spark loop goes through without any error, but does not produce an output

I have a file in HDFS containing paths of various other files. Here is the file called file1:
path/of/HDFS/fileA
path/of/HDFS/fileB
path/of/HDFS/fileC
.
.
.
I am using a for loop in Scala Spark as follows to read each line of the above file and process it in another function:
val lines = Source.fromFile("path/to/file1.txt").getLines.toList
for (i <- lines) {
  i.toString()
  val firstLines = sc.hadoopFile(i, classOf[TextInputFormat], classOf[LongWritable], classOf[Text]).flatMap {
    case (k, v) => if (k.get == 0) Seq(v.toString) else Seq.empty[String]
  }
}
When I run the above loop, it runs through without returning any errors and I get the Scala prompt on a new line: scala>
However, when I try to see a few lines of output which should be stored in firstLines, it does not work:
scala> firstLines
<console>:38: error: not found: value firstLines
firstLines
^
What is the problem with the above loop that keeps it from producing output, even though it runs through without any errors?
Additional info
The function hadoopFile accepts a String path name as its first parameter. That is why I am trying to pass each line of file1 (each line is a path name) as a String in the first parameter i. The flatMap call takes the first line of the file that has been passed to hadoopFile, keeps only that line, and discards all the others. So the desired output (firstLines) should be the first line of every file that is passed to hadoopFile through its path name (i).
I tried running the function for just a single file, without a loop, and that produces the output:
val firstLines = sc.hadoopFile("path/of/HDFS/fileA", classOf[TextInputFormat], classOf[LongWritable], classOf[Text]).flatMap {
  case (k, v) => if (k.get == 0) Seq(v.toString) else Seq.empty[String]
}
scala> firstLines.take(3)
res27: Array[String] = Array(<?xml version="1.0" encoding="utf-8"?>)
fileA is an XML file, so you can see the resulting first line of that file. So I know the function works fine; it is just a problem with the loop that I am not able to figure out. Please help.
The variable firstLines is defined in the body of the for loop and its scope is therefore limited to this loop. This means you cannot access the variable outside of the loop, and this is why the Scala compiler tells you error: not found: value firstLines.
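A minimal illustration of that scoping rule (plain Scala, nothing Spark-specific):
for (i <- List(1, 2, 3)) {
  val x = i * 2 // x only exists inside the body of this loop
}
// x is not visible here: referring to it gives "error: not found: value x"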
From your description, I understand you want to collect the first line of every file listed in lines.
The "every" here can translate into different constructs in Scala. We can use something like the for loop you wrote or, even better, adopt a functional approach and use a map function applied to the list of files. In the code below I put inside the map the code you used in your description, which creates a HadoopRDD and applies flatMap with your function to retrieve the first line of a file.
We then obtain a list of RDD[String] of lines. At this stage, note that we have not started to do any actual work. To trigger the evaluation of the RDDs and collect the result, we need an additional call to the collect method for each of the RDDs in our list.
// Renamed "lines" to "files" as it is more explicit.
val fileNames = Source.fromFile("path/to/file1.txt").getLines.toList
val firstLinesRDDs = fileNames.map(sc.hadoopFile(_,classOf[TextInputFormat],classOf[LongWritable],classOf[Text]).flatMap {
case (k, v) => if (k.get == 0) Seq(v.toString) else Seq.empty[String]
})
// firstLinesRDDs is a list of RDD[String]. Based on this code, each RDD
// should consist in a single String value. We collect them using RDD#collect:
val firstLines = firstLinesRDDs.map(_.collect)
However, this approach suffers from a flaw that prevents us from benefiting from the advantages Spark can provide.
When we apply the operation inside map to fileNames, we are not working with an RDD; the file names are processed sequentially on the driver (the process that hosts your Spark session) rather than as part of a parallelizable Spark job. This is equivalent to what you wrote in your second block of code, one file name at a time.
To address the problem, what can we do? A good thing to keep in mind when working with Spark is to push the creation of the RDDs as early as possible in our code. Why? Because this allows Spark to parallelize and optimize the work we want to do. Your example could be a textbook illustration of this concept; the only additional complexity here is the requirement to manipulate several files.
In our present case, we can benefit from the fact that hadoopFile accepts a comma-separated list of files as input. Therefore, instead of sequentially creating an RDD for every file, we create one RDD for all of them:
val firstLinesRDD = sc.hadoopFile(fileNames.mkString(","), classOf[TextInputFormat], classOf[LongWritable], classOf[Text]).flatMap {
  case (k, v) => if (k.get == 0) Seq(v.toString) else Seq.empty[String]
}
And we retrieve our first lines with a single collect:
val firstLines = firstLinesRDD.collect
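As a quick sanity check (a usage sketch, assuming every listed file is non-empty), the collected result can simply be printed:
firstLines.foreach(println) // one first line per input file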

Spark - create RDD of (label, features) pairs from CSV file

I have a CSV file and want to perform a simple LinearRegressionWithSGD on the data.
Sample data is as follows (the file has 99 rows in total, including the header) and the objective is to predict the y_3 variable:
y_3,x_6,x_7,x_73_1,x_73_2,x_73_3,x_8
2995.3846153846152,17.0,1800.0,0.0,1.0,0.0,12.0
2236.304347826087,17.0,1432.0,1.0,0.0,0.0,12.0
2001.9512195121952,35.0,1432.0,0.0,1.0,0.0,5.0
992.4324324324324,17.0,1430.0,1.0,0.0,0.0,12.0
4386.666666666667,26.0,1430.0,0.0,0.0,1.0,25.0
1335.9036144578313,17.0,1432.0,0.0,1.0,0.0,5.0
1097.560975609756,17.0,1100.0,0.0,1.0,0.0,5.0
3526.6666666666665,26.0,1432.0,0.0,1.0,0.0,12.0
506.8421052631579,17.0,1430.0,1.0,0.0,0.0,5.0
2095.890410958904,35.0,1430.0,1.0,0.0,0.0,12.0
720.0,35.0,1430.0,1.0,0.0,0.0,5.0
2416.5,17.0,1432.0,0.0,0.0,1.0,12.0
3306.6666666666665,35.0,1800.0,0.0,0.0,1.0,12.0
6105.974025974026,35.0,1800.0,1.0,0.0,0.0,25.0
1400.4624277456646,35.0,1800.0,1.0,0.0,0.0,5.0
1414.5454545454545,26.0,1430.0,1.0,0.0,0.0,12.0
5204.68085106383,26.0,1800.0,0.0,0.0,1.0,25.0
1812.2222222222222,17.0,1800.0,1.0,0.0,0.0,12.0
2763.5928143712576,35.0,1100.0,1.0,0.0,0.0,12.0
I already read the data with the following command:
val data = sc.textFile(datadir + "/data_2.csv");
When I want to create an RDD of (label, features) pairs with the following command:
val parsedData = data.map { line =>
  val parts = line.split(',')
  LabeledPoint(parts(0).toDouble, Vectors.dense(parts(1).split(' ').map(_.toDouble)))
}.cache()
So I cannot continue to train a model. Any help?
P.S. I run Spark from the Scala IDE on Windows 7 x64.
After a lot of effort I found the solution. The first problem was related to the header row and the second to the mapping function. Here is the complete solution:
// To read the file
val csv = sc.textFile(datadir + "/data_2.csv")
// To find the header
val header = csv.first
// To remove the header (keep only lines whose first character differs from the header's)
val data = csv.filter(_(0) != header(0))
// To create an RDD of (label, features) pairs
val parsedData = data.map { line =>
  val parts = line.split(',')
  // label is the first column, all remaining columns are the features
  LabeledPoint(parts(0).toDouble, Vectors.dense(parts.tail.map(_.toDouble)))
}.cache()
I hope this saves you some time.
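With parsedData in place, the training step the question was aiming for could then look roughly like this (a sketch; the import and the iteration count are assumptions, not part of the original answer):
import org.apache.spark.mllib.regression.LinearRegressionWithSGD

val numIterations = 100 // arbitrary example value
val model = LinearRegressionWithSGD.train(parsedData, numIterations)
println("Intercept: " + model.intercept)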
When you read in your file, the first line
y_3,x_6,x_7,x_73_1,x_73_2,x_73_3,x_8
is also read and transformed in your map function, so you end up calling toDouble on "y_3". You need to filter out the first row and do the learning using the remaining rows.
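A common way to do that header filtering (a sketch along the lines of the accepted answer, but comparing whole lines rather than first characters; it reuses the csv RDD name from above):
val header = csv.first()
val rows = csv.filter(_ != header) // drop only the exact header line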