I am new to Flink (v1.3.2) and trying to read Avro records continuously in Scala on EMR. I know that for text files you can use something like the following, and it will keep running and scanning the directory.
val stream = env.readFile(
  inputFormat = textInputFormat,
  filePath = path,
  watchType = FileProcessingMode.PROCESS_CONTINUOUSLY,
  interval = Time.seconds(10).toMilliseconds
)
Is there a similar way in Flink for Avro records? Right now I have the following code:
val textInputFormat = new AvroInputFormat(new Path(path), classOf[User])
textInputFormat.setNestedFileEnumeration(true)
val avroInputStream = env.createInput(textInputFormat)
val output = avroInputStream.map(line => (line.getUserID, 1))
  .keyBy(0)
  .timeWindow(Time.seconds(10))
  .sum(1)
output.print()
I am able to see the output, but then the Flink job switches to FINISHED. I still want the job to keep running and waiting for any new files that arrive in the future. Is there something like FileProcessingMode.PROCESS_CONTINUOUSLY for Avro input? Please suggest!
I figured this out by setting up a Flink YARN session on EMR and running readFile with PROCESS_CONTINUOUSLY:
env.readFile(textInputFormat, path, FileProcessingMode.PROCESS_CONTINUOUSLY, 100)
Create a new Flink YARN session using flink-yarn-session -n 2 -d.
Get the application ID using yarn application -list; for example, application_0000000000_0002.
Attach the Flink job to that application ID: flink run -m yarn-cluster -yid application_0000000000_0002 xxx.jar
More detail can be found in the EMR documentation: https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-flink.html
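For reference, here is a minimal sketch (my own summary, not verbatim from the answer) of plugging the question's AvroInputFormat into readFile with PROCESS_CONTINUOUSLY; the User class, path, and env are assumed to be the same as in the question:
// Continuously scan `path` for Avro files every 10 seconds.
val avroInputFormat = new AvroInputFormat(new Path(path), classOf[User])
avroInputFormat.setNestedFileEnumeration(true)
val avroInputStream = env.readFile(
  avroInputFormat,
  path,
  FileProcessingMode.PROCESS_CONTINUOUSLY,
  Time.seconds(10).toMilliseconds
)
avroInputStream.map(user => (user.getUserID, 1))
  .keyBy(0)
  .timeWindow(Time.seconds(10))
  .sum(1)
  .print()
env.execute("continuous-avro-word-count")
With the continuous watch type the job stays RUNNING instead of switching to FINISHED, so files that arrive later are picked up on each scan interval.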
I am using Cucumber with Scala and the following jars:
cucumber-junit-1.2.0.jar
cucumber-core-1.2.0.jar
cucumber-html-0.2.3.jar
cucumber-jvm-deps-1.0.3.jar
cucumber-java-1.2.0.jar
I am using the Cucumber framework in my big data testing, and using Spark to read/write/process the data.
I am using the Cucumber cli.Main method to run my features:
import cucumber.api.cli.Main
import org.apache.spark.SparkFiles

val glue = args(0)
val gluePath = args(1)
val tag = args(2)
val tagName = args(3)
val fileNames = args(4)
val arrFileNames = fileNames.split(",")
arrFileNames.foreach(x => sqlContext.sparkContext.addFile(x))

val plugin = "-p"
val pluginNameAndPath = "com.cucumber.listener.ExtentCucumberFormatter:hdfs:///tmp/target/cucumber-reports/report.html"
val pluginNameAndPathJson = "json:hdfs:///tmp/target/cucumber-reports/report.json"

Main.main(Array(glue, gluePath, tag, tagName, plugin, pluginNameAndPath, plugin, pluginNameAndPathJson, SparkFiles.get("xxx.feature")))
In the above code, when I run in cluster mode it runs successfully, but the Cucumber report is not generated at the given HDFS location.
But when I run in client mode (without hdfs:///), it runs successfully and creates the Cucumber report on the local node.
It seems like Cucumber does not know about the HDFS file system, so it cannot create the file in HDFS.
Can anyone please help with how to create the Cucumber report at an HDFS path, or suggest any other way to achieve this?
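One possible workaround (just a sketch under my own assumptions, not a confirmed fix; the local path below is a hypothetical placeholder): let the Cucumber formatters write the report to the driver's local filesystem, then copy it to HDFS with Hadoop's FileSystem API after Main.main(...) returns:
import org.apache.hadoop.fs.{FileSystem, Path}

// Hypothetical local path the Cucumber plugins would write to instead of hdfs:///...
val localReport = new Path("file:///tmp/cucumber-reports/report.html")
// Target location on HDFS (the default filesystem when running on the cluster)
val hdfsReport = new Path("/tmp/target/cucumber-reports/report.html")

val fs = FileSystem.get(sqlContext.sparkContext.hadoopConfiguration)
fs.mkdirs(hdfsReport.getParent)
// copyFromLocalFile(delSrc, overwrite, src, dst)
fs.copyFromLocalFile(false, true, localReport, hdfsReport)
Note that in cluster mode the driver runs on a cluster node, so the local report lands on that node's disk before being copied; the copy must run on the driver after the Cucumber run completes.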
AWS Glue jobs log output and errors to two different CloudWatch logs, /aws-glue/jobs/error and /aws-glue/jobs/output by default. When I include print() statements in my scripts for debugging, they get written to the error log (/aws-glue/jobs/error).
I have tried using:
log4jLogger = sparkContext._jvm.org.apache.log4j
log = log4jLogger.LogManager.getLogger(__name__)
log.warn("Hello World!")
but "Hello World!" doesn't show up in either of the logs for the test job I ran.
Does anyone know how to go about writing debug log statements to the output log (/aws-glue/jobs/output)?
TIA!
EDIT:
It turns out the above actually does work. What was happening was that I was running the job in the AWS Glue Script editor window which captures Command-F key combinations and only searches in the current script. So when I tried to search within the page for the logging output it seemed as if it hadn't been logged.
NOTE: I did discover through testing the first responder's suggestion that AWS Glue scripts don't seem to output any log message with a level less than WARN!
Try to use the built-in Python logger from the logging module; by default it writes messages to the standard output stream.
import logging
MSG_FORMAT = '%(asctime)s %(levelname)s %(name)s: %(message)s'
DATETIME_FORMAT = '%Y-%m-%d %H:%M:%S'
logging.basicConfig(format=MSG_FORMAT, datefmt=DATETIME_FORMAT)
logger = logging.getLogger(<logger-name-here>)
logger.setLevel(logging.INFO)
...
logger.info("Test log message")
I know this question is not new, but maybe it could be helpful for someone:
For me, logging in Glue works with the following lines of code:
from awsglue.context import GlueContext
from pyspark.context import SparkContext

# create glue context
sc = SparkContext()
glueContext = GlueContext(sc)
# get the Glue logger
logger = glueContext.get_logger()
...
# write into the log with:
logger.info("s3_key: " + your_value)
I noticed the above answers are written in Python. For Scala you could do the following:
import com.amazonaws.services.glue.log.GlueLogger

object GlueApp {
  def main(sysArgs: Array[String]) {
    val logger = new GlueLogger
    logger.info("info message")
    logger.warn("warn message")
    logger.error("error message")
  }
}
You can find both the Python and Scala solutions in the official documentation here.
Just in case this helps. This works to change the log level.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

sc = SparkContext()
sc.setLogLevel('DEBUG')
glueContext = GlueContext(sc)
logger = glueContext.get_logger()
logger.info('Hello Glue')
This worked for INFO level in a Glue Python job:
import sys
import logging

root = logging.getLogger()
root.setLevel(logging.DEBUG)
handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
root.addHandler(handler)
root.info("check")
source
I faced the same problem. I resolved it by adding:
logging.getLogger().addHandler(logging.StreamHandler(sys.stdout))
Before that, there were no prints at all, not even at ERROR level.
The idea was taken from here
https://medium.com/tieto-developers/how-to-do-application-logging-in-aws-745114ac6eb7
Another option would be to log to stdout and hook AWS Glue's logging up to stdout (using stdout is actually one of the best practices in cloud logging).
Update: it only works with setLevel("WARNING") and when printing at ERROR or WARNING level. I didn't find out how to make it work for the INFO level :(
If you're just debugging, print() (Python) or println() (Scala) works just fine.
I am trying to execute a Spark Streaming application that processes a stream of file data to perform a word count.
The directory I am reading from is on Windows. As shown, I am using a local directory like "Users/Name/Desktop/Stream". It is not HDFS.
I created a folder called "Stream" on the desktop.
I started the Spark Streaming application, and after that I added some text files into the 'Stream' folder. But my Spark application is not able to read the files; it always gives empty results.
Here is my code.
import org.apache.spark.streaming.{Seconds, StreamingContext}

// args(0) = local[2]
object WordCount {
  def main(args: Array[String]) {
    val ssc = new StreamingContext(args(0), "word_count", Seconds(5))
    val lines = ssc.textFileStream("Users/name/Desktop/Stream")
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
Output: Getting empty data every 5 seconds
17/05/18 07:35:00 INFO Executor: Running task 0.0 in stage 71.0 (TID 35)
-------------------------------------------
Time: 1495107300000 ms
-------------------------------------------
I tried giving the path as C:/Users/name/Desktop/Stream as well; still the same issue, and the application could not read the files.
Can anyone please advise whether I am giving an incorrect directory path?
Your code is fine, so the only issue is to use the proper path to the directory. Use the file:// prefix to denote the local file system, which would give file://C:/Users/name/Desktop/Stream.
Please proceed one step at a time to confirm that our understanding is at the same level.
When you execute the Spark Streaming application, create the monitored directory in the same directory where you start the application, say Stream. Once you confirm that the application works fine with that local directory, we'll fix it globally to read from any directory on Windows (if that's still needed).
Please also make sure that you "move" your files, as the operation that creates a file in the monitored directory has to be atomic (partial writes will mark the file as processed; see StreamingContext).
Files must be written to the monitored directory by "moving" them from another location within the same file system.
As you can see in the code, the directory path will eventually be "wrapped" in a Hadoop Path, so the issue is to convince it to accept your path:
if (_path == null) _path = new Path(directory)
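A short sketch of the suggested change (the file:// form follows the answer above; if your setup rejects it, the fully qualified file:///C:/... form is a common alternative):
// Point textFileStream at an explicit local-file URI instead of a bare path.
val lines = ssc.textFileStream("file://C:/Users/name/Desktop/Stream")
// Alternative form if the two-slash variant is not accepted:
// val lines = ssc.textFileStream("file:///C:/Users/name/Desktop/Stream")
// Add new inputs by moving files into the monitored directory so each file appears atomically.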
My use case requires accessing a file stored in HDFS from inside the Spark map function. This use case uses a custom input format that does not provide any data to the map function; instead, the map function obtains the input split and accesses the data itself. I am using the code below to do this:
val hConf: Configuration = sc.hadoopConfiguration
hConf.set("fs.hdfs.impl", classOf[org.apache.hadoop.hdfs.DistributedFileSystem].getName)
hConf.set("fs.file.impl", classOf[org.apache.hadoop.fs.LocalFileSystem].getName)
var job = new Job(hConf)
FileInputFormat.setInputPaths(job,new Path("hdfs:///user/bala/MyBinaryFile"));
var hRDD = new NewHadoopRDD(sc, classOf[RandomAccessInputFormat],
  classOf[IntWritable],
  classOf[BytesWritable],
  job.getConfiguration()
)
val count = hRDD.mapPartitionsWithInputSplit{ (split, iter) => myfuncPart(split, iter)}.collect()
As of now, I am not doing anything inside myfuncPart. It simply returns a map as below:
iter.map { tpl ⇒ (tpl._1, tpl._2.getCapacity) }
When I submit the job along with the dependencies, I get the error below:
15/10/30 11:11:39 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 40.221.94.235): java.io.IOException: No FileSystem for scheme: spark
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
At first glance, it seems like a small error related to Spark jars, but I could not crack it. Any help will be greatly appreciated.
It turned out to be a mistake on my side in the way I was launching the job. The command I was using did not have the proper option in it, hence the issue. I was using the command below:
spark-submit --class org.myclass --jars myjar spark://myhost:7077 myjob.jar
Below is the correct one
spark-submit --class org.myclass --jars myjar --master spark://myhost:7077 myjob.jar
This is a small mistake, but somehow I missed it. Without --master, spark-submit treats the spark://myhost:7077 argument as the application resource rather than the master URL, and the executors then try to fetch it through Hadoop's FileSystem, which has no handler for the spark scheme, hence the error. Now it is working.
In Scala, as an experiment, I create a sequence file on Tachyon using Spark and read it back in. I also want to delete the file from Tachyon using the Spark script.
val rdd = sc.parallelize(Array(("a",2), ("b",3), ("c",1)))
rdd.saveAsSequenceFile("tachyon://127.0.0.1:19998/files/123.sf2")
val rdd2 = sc.sequenceFile[String,Int]("tachyon://127.0.0.1:19998/files/123.sf2")
I don't understand the Scala language very well, and I cannot find a reference on file path manipulation. I did find a way of using Java from Scala to do this, but I cannot get it to work with Tachyon.
import java.io._
new File("tachyon://127.0.0.1:19998/files/123.sf2").delete()
There are different approaches, e.g.:
CLI:
./bin/tachyon tfs rm filePath
More info: http://tachyon-project.org/Command-Line-Interface.html
API:
TachyonFS sTachyonClient = TachyonFS.get(args[0]);
sTachyonClient.delete(filePath, true);
More info:
https://github.com/amplab/tachyon/blob/master/core/src/main/java/tachyon/examples/BasicOperations.java
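If you want to do the delete from inside the Spark/Scala script itself, one possible sketch (my assumption, not part of the answer above) is to go through Hadoop's FileSystem API with the same tachyon:// URI; this assumes the Tachyon client jar is on the classpath and the tachyon scheme is registered (fs.tachyon.impl) in the Hadoop configuration:
import java.net.URI
import org.apache.hadoop.fs.{FileSystem, Path}

val target = new Path("tachyon://127.0.0.1:19998/files/123.sf2")
val fs = FileSystem.get(new URI("tachyon://127.0.0.1:19998"), sc.hadoopConfiguration)
// delete(path, recursive): returns true if the path was removed
val deleted = fs.delete(target, true)
println(s"deleted: $deleted")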