Read data from HDFS - Scala

I'm using FSDataInputStream to access data from HDFS.
The following is the snippet I'm using:
val fs = FileSystem.get(new java.net.URI(#HDFS_URI), new Configuration())
val stream = fs.open(new Path(#PATH))
val reader = new BufferedReader(new InputStreamReader(stream))
val offset: String = reader.readLine() // Reads the string "5432" stored in the file
The expected output is "5432", but the actual output is "^#5^#4^#3^#2".
I'm not able to trim the "^#" since they are not treated as regular characters. Please help with an appropriate solution.
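The "^#" in front of every digit suggests each character is preceded by a NUL byte, i.e. the file is most likely UTF-16 encoded while the InputStreamReader decodes it with the platform default charset. A minimal sketch of a fix, assuming the file really is UTF-16 (the #HDFS_URI and #PATH placeholders are kept from the question):
import java.nio.charset.StandardCharsets
val fs = FileSystem.get(new java.net.URI(#HDFS_URI), new Configuration())
val stream = fs.open(new Path(#PATH))
// Decode explicitly as UTF-16 instead of relying on the platform default
val reader = new BufferedReader(new InputStreamReader(stream, StandardCharsets.UTF_16))
val offset: String = reader.readLine() // should now yield "5432"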

Related

java.lang.IllegalArgumentException when creating an HDFS file

I have HDFS and some text, and I want to create a file containing that text. I tried to use the HDFS API and FSDataOutputStream, but got an exception. Could you please help me resolve it?
The exception is:
Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: hdfs:/user/user1, expected: file:///
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:647)
at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:82)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:513)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:499)
at org.apache.hadoop.fs.ChecksumFileSystem.mkdirs(ChecksumFileSystem.java:594)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:448)
at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:435)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:890)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:776)
at com.example.FileBuilder$.buildFile(FileBuilder.scala:23)
The code is:
val fs = FileSystem.get(new Configuration())
val path = new Path(s"hdfs:////user/" + "fileName.sql")
val fsDataOutputStream = fs.create(path)
val outputStreamWriter = new OutputStreamWriter(fsDataOutputStream, "UTF-8")
val bufferedWriter = new BufferedWriter(outputStreamWriter)
bufferedWriter.write(data)
bufferedWriter.close()
outputStreamWriter.close()
fsDataOutputStream.close()
I think there is a problem with the file path. Can you test by replacing the below portion of code in yours?
val configuration = new Configuration()
val fs = FileSystem.get(new URI("<url:port>"), configuration)
val filePath = new Path("/user/fileName.sql")
val fsDataOutputStream = fs.create(filePath)
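For context, the exception occurs because FileSystem.get(new Configuration()) returns the local filesystem when fs.defaultFS is not set, so any hdfs: path is rejected with "expected: file:///" (note RawLocalFileSystem in the stack trace). An alternative sketch that keeps the single-argument get, assuming the NameNode address is known (host and port below are placeholders):
val conf = new Configuration()
// Point the default filesystem at the cluster instead of file:///
conf.set("fs.defaultFS", "hdfs://<host>:<port>")
val fs = FileSystem.get(conf)
val fsDataOutputStream = fs.create(new Path("/user/fileName.sql"))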

Writing file using FileSystem to S3 (Scala)

I'm using Scala and trying to write a file with string content to S3.
I've tried to do that with FileSystem, but I get the error:
"Wrong FS: s3a"
val content = "blabla"
val fs = FileSystem.get(spark.sparkContext.hadoopConfiguration)
val s3Path: Path = new Path("s3a://bucket/ha/fileTest.txt")
val localPath= new Path("/tmp/fileTest.txt")
val os = fs.create(localPath)
os.write(content.getBytes)
fs.copyFromLocalFile(localPath,s3Path)
and I'm getting this error:
java.lang.IllegalArgumentException: Wrong FS: s3a://...txt, expected: file:///
What is wrong?
Thanks!!
You need to ask for the specific filesystem for that scheme; then you can create the text file directly on the remote system.
val s3Path: Path = new Path("s3a://bucket/ha/fileTest.txt")
val fs = s3Path.getFileSystem(spark.sparkContext.hadoopConfiguration)
val os = fs.create(s3Path, true)
os.write("hi".getBytes)
os.close()
There's no need to write locally and then upload; the s3a connector will buffer and upload as needed.
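If the filesystem resolves but writes still fail with authentication errors, the s3a connector also needs credentials in the Hadoop configuration; a minimal sketch assuming plain key-based auth (the values are placeholders):
val hadoopConf = spark.sparkContext.hadoopConfiguration
// Placeholder credentials; prefer instance profiles or credential providers in practice
hadoopConf.set("fs.s3a.access.key", "<ACCESS_KEY>")
hadoopConf.set("fs.s3a.secret.key", "<SECRET_KEY>")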

Cannot write a string to an HDFS file using Scala

I wrote some code to create a file in HDFS and write bytes to it. This is the code:
def write(uri: String, filePath: String, data: String): Unit = {
  System.setProperty("HADOOP_USER_NAME", "hibou")
  val path = new Path(filePath + "/hello.txt")
  val conf = new Configuration()
  conf.set("fs.defaultFS", uri)
  val fs = FileSystem.get(conf)
  val os = fs.create(path)
  os.writeBytes(data)
  os.flush()
  fs.close()
}
The code succeeds without error, but I only see that the file was created. When I examine the content of the file with hdfs dfs -cat /.../hello.txt, I don't see any content. Why?
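A likely cause: the stream is flushed but never closed. In HDFS, data is only guaranteed to be visible to readers after close() (or hflush()/hsync()); a plain flush() is not enough. A minimal sketch of the fix, keeping the rest of the method as-is:
val os = fs.create(path)
os.writeBytes(data)
// Close the stream itself, not just the FileSystem, so the data is persisted
os.close()
fs.close()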

Writing to a file in Apache Spark

I am writing Scala code that requires me to write to a file in HDFS.
When I use FileWriter.write locally, it works. The same thing does not work on HDFS.
Upon checking, I found the following options for writing in Apache Spark:
RDD.saveAsTextFile and DataFrame.write.format.
My question is: what if I just want to write an int or string to a file in Apache Spark?
Follow-up:
I need to write a header, the DataFrame contents, and then some appended string to an output file.
Does sc.parallelize(Seq(<String>)) help?
Create an RDD with your data (int/string) using Seq; see parallelized collections in the Spark docs for details:
sc.parallelize(Seq(5)) // for writing an Int (5)
sc.parallelize(Seq("Test String")) // for writing a String

// Complete example: writing an Int
val conf = new SparkConf().setAppName("Writing Int to File").setMaster("local")
val sc = new SparkContext(conf)
val intRdd = sc.parallelize(Seq(5))
intRdd.saveAsTextFile("out\\int\\test")

// Complete example: writing a String
val conf = new SparkConf().setAppName("Writing string to File").setMaster("local")
val sc = new SparkContext(conf)
val stringRdd = sc.parallelize(Seq("Test String"))
stringRdd.saveAsTextFile("out\\string\\test")
Follow-up example (tested as below):
val conf = new SparkConf().setAppName("Total Countries having Icon").setMaster("local")
val sc = new SparkContext(conf)
val headerRDD = sc.parallelize(Seq("HEADER"))
// Replace the BODY part with your DF (see the sketch below the output)
val bodyRDD= sc.parallelize(Seq("BODY"))
val footerRDD = sc.parallelize(Seq("FOOTER"))
//combine all rdds to final
val finalRDD = headerRDD ++ bodyRDD ++ footerRDD
//finalRDD.foreach(line => println(line))
//output to one file
finalRDD.coalesce(1, true).saveAsTextFile("test")
output:
HEADER
BODY
FOOTER
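To drop real DataFrame contents into the BODY slot, one option is to flatten each row to a string before combining the RDDs; a sketch assuming a hypothetical DataFrame named df:
// df is hypothetical; each Row is flattened to a comma-separated line
val bodyRDD = df.rdd.map(_.mkString(","))
val finalRDD = headerRDD ++ bodyRDD ++ footerRDD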

Reading an Avro file from Scala

I'm trying to read an Avro file using Scala.
I've extracted the file's schema using avro-tools and saved it to a file. I then try to read the data using the following code:
val zibi = scala.io.Source.fromFile("/home/wasabi/schema").mkString
val schema_obj = new Schema.Parser
val schema2 = schema_obj.parse(zibi)
val READER2 = new GenericDatumReader[GenericRecord](schema2)
val myFile = Files.readAllBytes(Paths.get("/tmp/check/CMRF_80_1442744555901-1_1_2_1_1_1_4_10_1.avro"))
val datum = READER2.read(null, DecoderFactory.defaultFactory.createBinaryDecoder(myFile,null))
But I keep hitting IOExceptions as such:
java.io.IOException: Invalid int encoding
at org.apache.avro.io.BinaryDecoder.readInt(BinaryDecoder.java:145)
at org.apache.avro.io.ValidatingDecoder.readInt(ValidatingDecoder.java:83)
at org.apache.avro.generic.GenericDatumReader.readInt(GenericDatumReader.java:444)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:159)
at org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:193)
at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:183)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:151)
at org.apache.avro.generic.GenericDatumReader.readArray(GenericDatumReader.java:219)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
at org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:193)
at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:183)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:151)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:142)
When I read the file through avro-tools, it reads just fine.
What am I doing wrong?
Try using a DataFileReader instead of a BinaryDecoder.
While encoders/decoders are used for writing and reading raw Avro data, I suspect they are choking on the header info found in Avro data files.
import java.io.File
import org.apache.avro.Schema
import org.apache.avro.generic.{ GenericDatumReader, GenericRecord }
import org.apache.avro.file.DataFileReader

val zibi = scala.io.Source.fromFile("/home/wasabi/schema").mkString
val schema_obj = new Schema.Parser
val schema2 = schema_obj.parse(zibi)
val READER2 = new GenericDatumReader[GenericRecord](schema2)
val myFile = new File("/tmp/check/CMRF_80_1442744555901-1_1_2_1_1_1_4_10_1.avro")
val dataFileReader = new DataFileReader[GenericRecord](myFile, READER2)
val datum = dataFileReader.next()
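To read every record rather than only the first, the DataFileReader can be iterated like a plain iterator and then closed; a short sketch:
// Iterate over all records in the data file
while (dataFileReader.hasNext) {
  val record = dataFileReader.next()
  println(record)
}
dataFileReader.close()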