How to run the first example of Apache Flink - Scala

I am trying to run the first example from the O'Reilly book "Stream Processing with Apache Flink" and from the Flink project. Each gives a different error.
The example from the book gives a NoClassDefFound error.
The example from the Flink project gives java.net.ConnectException: Connection refused (Connection refused), but does create a Flink job, see screenshot.
Details below.
Book example
java.lang.BootstrapMethodError: java.lang.NoClassDefFoundError: scala/runtime/java8/JFunction1$mcVI$sp
at io.github.streamingwithflink.chapter1.AverageSensorReadings$$anon$3.createSerializer(AverageSensorReadings.scala:50)
The instructions from the book are:
download flink-1.7.1-bin-scala_2.12.tgz
extract
start the cluster: ./bin/start-cluster.sh
open Flink's web UI at http://localhost:8081
This all works fine.
Download the jar file that includes the examples in this book.
Run the example:
./bin/flink run \
-c io.github.streamingwithflink.chapter1.AverageSensorReadings \
examples-scala.jar
From the error message at the top of this post, it seems that the class is not found.
I put the jar in the same directory I am running the command from.
java -version
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (Zulu 8.44.0.9-CA-macosx) (build 1.8.0_242-b20)
OpenJDK 64-Bit Server VM (Zulu 8.44.0.9-CA-macosx) (build 25.242-b20, mixed mode)
I also tried compiling the jar myself from
https://github.com/streaming-with-flink/examples-scala.git
with
mvn clean build
The error is the same.
Flink project tutorial
Running the SocketWindowWordCount:
./bin/flink run examples/streaming/SocketWindowWordCount.jar --port 9000
I get a job, but it fails with
java.net.ConnectException: Connection refused (Connection refused)
It is not clear to me what connection is refused. I tried different ports with no change.
How can I run flink code successfully?

I tried to reproduce the failing AverageSensorReadings example, but it was working on my setup. I'll try to look deeper into it tomorrow.
Regarding the SocketWindowWordCount example, the error message indicates that the Flink job failed to open a connection to the socket on port 9000. You need to open the socket before you start the job. You can do this, for example, with netcat:
nc -l 9000
After the job is running, you can send messages by typing, and these messages will be ingested into the Flink job. You can see the stats in the web UI evolving according to the number of words your messages consist of.
Note that netcat closes the socket when you stop the Flink job.
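For context, this is roughly what a socket word count job like the shipped SocketWindowWordCount looks like in the Flink Scala API (a paraphrased sketch, not the exact shipped source; the window size is an assumption). The key point is that socketTextStream connects to the port as a client, which is why the job fails with ConnectException when nothing is listening there yet:

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

object SocketWindowWordCountSketch {
  case class WordWithCount(word: String, count: Long)

  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // connects as a client; fails with ConnectException if nothing listens on localhost:9000
    val text = env.socketTextStream("localhost", 9000)

    val windowCounts = text
      .flatMap(_.split("\\s+"))
      .map(w => WordWithCount(w, 1L))
      .keyBy(_.word)
      .timeWindow(Time.seconds(5))
      .reduce((a, b) => WordWithCount(a.word, a.count + b.count))

    windowCounts.print()
    env.execute("Socket Window WordCount (sketch)")
  }
}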

I am able to run the "Stream Processing with Apache Flink" code from IntelliJ.
See this post

I am able to run the "Stream Processing with Apache Flink" AverageSensorReadings code on my Flink cluster by using sbt. I have never used sbt before but thought I would try it. My project is here.
Note that I moved AverageSensorReading.scala to chapter5, since that is where the code is explained, and changed the package to com.mitzit.
Use sbt assembly to create the jar.
Run it on the Flink cluster:
./bin/flink run \
-c com.mitzit.chapter5.AverageSensorReadings \
/path/to/project/sbt-flink172/target/scala-2.11/sbt-flink172-assembly-0.1.jar
This works fine. I have no idea why this works and the mvn-compiled jar does not.
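In case it helps someone reproduce this, a minimal build definition for packaging a Flink Scala job with sbt-assembly looks roughly like the following (a sketch, not the exact files from my repo; the Flink, Scala, and plugin versions are assumptions, so adjust them to match your cluster). Marking the Flink dependencies as "provided" keeps the runtime out of the fat jar, since the cluster already ships it.

// build.sbt - sketch of a Flink job project packaged with sbt-assembly
name := "sbt-flink172"
version := "0.1"
scalaVersion := "2.11.12"

val flinkVersion = "1.7.2"

libraryDependencies ++= Seq(
  // "provided": the Flink cluster supplies these at runtime
  "org.apache.flink" %% "flink-scala" % flinkVersion % "provided",
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion % "provided"
)

// project/plugins.sbt (separate file)
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.9")

Running sbt assembly then produces the fat jar under target/scala-2.11/ that can be submitted with ./bin/flink run as above.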

Related

Apache Flink doesn't connect to port 8081

Hi, I am new to Apache Flink and I am trying to run a batch word count example to start learning about it. I have run
./bin/start-cluster.sh
and then I executed ./bin/flink run ./examples/batch/WordCount.jar --input test.txt --output out.txt
and I get the following:
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:8081
console messages
So I think it's a server connection error. I tried some things like XAMPP, but nothing helped.
So what's your opinion on that?
It seems like your cluster is not starting. Please try ./bin/start-cluster.sh again and go to http://localhost:8081/ to confirm your cluster is up. After that, the word count example should run fine once you specify the appropriate input and output files.
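For reference, the shipped batch WordCount example is roughly the following Scala program, with --input and --output mapping to the read and write paths (a paraphrased sketch, not the exact shipped source):

import org.apache.flink.api.scala._

object BatchWordCountSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // read the file passed as --input
    val text = env.readTextFile("test.txt")

    val counts = text
      .flatMap(_.toLowerCase.split("\\W+"))
      .filter(_.nonEmpty)
      .map((_, 1))
      .groupBy(0)
      .sum(1)

    // write the (word, count) pairs to the file passed as --output
    counts.writeAsCsv("out.txt")
    env.execute("Batch WordCount (sketch)")
  }
}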

How to set up Apache Atlas using embedded Cassandra and Apache Solr

Step 1: Clone the repository.
git clone https://github.com/apache/atlas
Step 2: Generated the tar file by executing the below command:
mvn clean -DskipTests package -Pdist,embedded-cassandra-solr
Step 3: Once the build was successful, extracted the 'apache-atlas-3.0.0-SNAPSHOT-server.tar' file and executed the below command:
.\bin\atlas_start.py
I saw the below messages in the console:
Starting Atlas server on host: localhost
Starting Atlas server on port: 21000
......................
Apache Atlas Server started!!!
But when I hit the URL 'http://localhost:21000/', I get a service unavailable message.
HTTP ERROR 503 Service Unavailable
URI: /
STATUS: 503
MESSAGE: Service Unavailable
SERVLET: -
The log files are empty, and I am not sure how to identify the issue.
A couple of questions:
a. Do I need to explicitly set up Cassandra and Apache Solr for embedded mode too? In that case, please point me to the documentation.
b. Even though I generated the build using the embedded Cassandra profile, the application was still looking for the HADOOP_HOME property at launch. Can I know the reason for this?
I had the same problem and, after a while, I found that ZooKeeper wasn't starting at all, so I stopped the ZooKeeper service and restarted the installation of Atlas. (Here is the link to the installation guide that I followed: https://manjitsingh664.medium.com/apache-atlas-installation-guide-9098df98d5c3.)
For your case, replace:
mvn clean -DskipTests package -Pdist,embedded-hbase-solr
with:
mvn clean -DskipTests package -Pdist,embedded-cassandra-solr

Remote Debugging Scala Spark Jobs with VS Code

I'd like to be able to remote debug a Spark job written in Scala, running in a Docker container, with VS Code. This is what I have so far:
VS Code with scala-metals v0.8 installed
This debug launch configuration
I spin up a spark cluster with docker-compose up
I submit a spark job to the cluster started above with the following command
docker exec -it -e SPARK_SUBMIT_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=4000 -w /spark spark-job-starter_master_1 bin/spark-submit --class example.StarterSparkJob /build/example.jar
After doing so I see the output Listening for transport dt_socket at address: 4000
I attempt to launch the debugger from VS Code, but I get the error:
Debugger failed to attach: handshake failed - received >Content-Length< - expected >JDWP-Handshake<
The full details of the code are here https://github.com/aedenj/spark-job-starter/tree/vscode-debug-setup. Your guidance is appreciated.

Exception when running Spark job server in spark standalone mode

I'm trying out the Spark job server - specifically, the docker container option. I was able to run the WordCountExample app in spark local mode. However, I ran into an exception when I tried to point the app to a remote Spark master.
Following are the commands I used to run the WordCountExample app:
1. sudo docker run -d -p 8090:8090 -e SPARK_MASTER=spark://10.501.502.503:7077 velvia/spark-jobserver:0.6.0
2. sbt job-server-tests/package
3. curl --data-binary @job-server-tests/target/scala-2.10/job-server-tests_2.10-0.6.2-SNAPSHOT.jar localhost:8090/jars/test
4. curl -d "input.string = a b c a b see" 'localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample'
Following is the exception I hit when I ran step 4 above:
{
"status": "ERROR",
"result": {
"message": "Futures timed out after [15 seconds]",
"errorClass": "java.util.concurrent.TimeoutException",
"stack": ["scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)", "scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)", "scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)", "akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread$$anon$3.block(ThreadPoolBuilder.scala:169)", "scala.concurrent.forkjoin.ForkJoinPool.managedBlock(ForkJoinPool.java:3640)", "akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread.blockOn(ThreadPoolBuilder.scala:167)", "akka.dispatch.BatchingExecutor$Batch.blockOn(BatchingExecutor.scala:101)", "scala.concurrent.Await$.result(package.scala:107)", ...
I started the remote Spark cluster (master and workers) using
cd $SPARK_HOME
./sbin/start-all.sh
The remote cluster uses Spark version 1.5.1 (i.e., the prebuilt binary spark-1.5.1-bin-hadoop2.6).
Questions
Any suggestions on how I could debug this?
Are there any logs I could look into to figure out the root cause?
Thanks in advance.
This could be a network issue. The SJS server should be reachable from the Spark cluster.
I had the same problem with Spark 1.6.1. I changed the jobserver version to the latest (0.6.2.mesos-0.28.1.spark-1.6.1) and it works for me.
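For what it's worth, the class that classPath=spark.jobserver.WordCountExample points at is just a job object implementing the jobserver's SparkJob trait, something along these lines (a sketch against the 0.6.x API as I recall it, not the actual bundled example):

import com.typesafe.config.Config
import org.apache.spark.SparkContext
import spark.jobserver.{SparkJob, SparkJobInvalid, SparkJobValid, SparkJobValidation}

object WordCountSketch extends SparkJob {
  // called before runJob; rejects requests that lack the expected config key
  override def validate(sc: SparkContext, config: Config): SparkJobValidation =
    if (config.hasPath("input.string")) SparkJobValid
    else SparkJobInvalid("missing input.string")

  // the actual job; the returned value is serialized into the HTTP response
  override def runJob(sc: SparkContext, config: Config): Any = {
    val words = config.getString("input.string").split(" ").toSeq
    sc.parallelize(words).countByValue()
  }
}

The timeout in step 4 most likely occurs while the jobserver is still setting up the Spark context against the remote master, before any of this code runs, which would be consistent with the network issue mentioned above.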

Debugging MapReduce Hadoop in local mode in Eclipse. Failed to connect to remote VM

I am new to Hadoop and I am trying to debug MapReduce Hadoop in local mode in Eclipse on VirtualBox Ubuntu, following these articles: Debug Custom Java hadoop code in local environment and Hadoop MapReduce Debugging in Local Setup
In hadoop-env.sh I put the text
export HADOOP_OPTS="$HADOOP_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,address=8008"
I tried to run Eclipse from the command line:
eclipse -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8008
I also changed from hdfs to file:/// in core-site.xml in the Hadoop configuration:
<name>fs.default.name</name>
<value>file:///localhost:8020</value>
I checked port 8080. It seems to work okay:
netstat -atn | grep 8080
says tcp6 8080 LISTEN
http://localhost:8080 opens in the browser and says "Required param job, map and reduce".
Still, everything is useless, as when I try to set up a debug configuration with port 8080 in Eclipse, it fails with "failed to connect to remote VM".
Can anyone suggest a possible solution?
That isn't the way to run Eclipse as a debugger.
Run Eclipse without any command-line options and set up a debug configuration for a remote Java application that connects to port 8008.
[EDIT]
I also think your Hadoop debug options are wrong. I use:
-agentlib:jdwp=transport=dt_socket,address=8008,server=y,suspend=n