Remote Debugging Scala Spark Jobs with VS Code

I'd like to be able to remote-debug a Spark job written in Scala, running in a Docker container, from VS Code. This is what I have so far:
VS Code with scala-metals v0.8 installed
This debug launch configuration (a sketch of it follows the submit command below)
I spin up a Spark cluster with docker-compose up
I submit a Spark job to the cluster started above with the following command:
docker exec -it -e SPARK_SUBMIT_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=4000 -w /spark spark-job-starter_master_1 bin/spark-submit --class example.StarterSparkJob /build/example.jar
After doing so I see the output Listening for transport dt_socket at address: 4000
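The attach configuration in launch.json is along these lines (a simplified sketch; the buildTarget value here is a placeholder rather than my project's actual build target name):
{
  "type": "scala",
  "request": "attach",
  "name": "Attach to Spark job",
  "hostName": "localhost",
  "port": 4000,
  // placeholder: use the actual build target name reported by metals
  "buildTarget": "root"
}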
I attempt to launch the debugger from VS Code, but I get the following error:
Debugger failed to attach: handshake failed - received >Content-Length< - expected >JDWP-Handshake<
The full details of the code are here https://github.com/aedenj/spark-job-starter/tree/vscode-debug-setup. Your guidance is appreciated.

Related

Run Docker through Xcode XCTestCase

I'm setting up tests for my Xcode Multiplatform app. To create a test environment I created a Docker image that I want to run an XCTestCase against. I understand I'd have to make sure Docker is running before running the test. That said, I'm having problems getting the docker commands (build, run, or kill) to work.
I'm using a makefile to store the commands and planned to run the docker build and docker run commands in setUpWithError, while running the docker kill command in tearDownWithError. To run the commands I used Process to execute the shell commands. This works with ls, but when I run the docker command I'm told docker: No such file or directory. Through ls I know I'm in the right location, where the files exist. When I switch to the terminal, it works.
Is there something blocking docker from being run from my XCTestCase? If so, is there any way to get this to work, or do I need to manually start Docker and the Docker container before running these tests?
Update
Got Docker running; it was a path issue. Rather than saying docker build... I now give it the entire path, so /usr/local/bin/docker build.... The new issue is that Xcode doesn't give me permission to run this. I get:
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/create?name=remarkable": dial unix /var/run/docker.sock: connect: operation not permitted.
See 'docker run --help'.
Is there a way to allow me to run this in ONLY this XCTestCase?

Apache Flink doesn't connect to port 8081

Hi, I am new to Apache Flink and I am trying to run a batch WordCount example to start learning about it. I have run
./bin/start-cluster.sh
and then I executed ./bin/flink run ./examples/batch/WordCount.jar --input test.txt --output out.txt
and I get the following:
org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:8081
console messages
So I think it's a server connection error; I tried some things like XAMPP, but nothing helped.
What's your opinion on that?
It seems like your cluster is not starting. Please try ./bin/start-cluster.sh again and go to http://localhost:8081/ to confirm your cluster is up. After that, the WordCount example should run fine once you specify the appropriate input and output files.
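If you prefer the command line, you can also check that the JobManager's REST endpoint responds before submitting the job (a quick sanity check, assuming curl is available):
curl http://localhost:8081/overview
If the cluster is up, this returns a small JSON document describing the cluster; if it is not, you get the same connection refused error as the job submission.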

How to run the first example of Apache Flink

I am trying to run the first example from the O'Reilly book "Stream Processing with Apache Flink" and the example from the Flink project. Each gives a different error:
The example from the book gives a NoClassDefFoundError.
The example from the Flink project gives java.net.ConnectException: Connection refused (Connection refused), but it does create a Flink job (see screenshot).
Details below.
Book example
java.lang.BootstrapMethodError: java.lang.NoClassDefFoundError: scala/runtime/java8/JFunction1$mcVI$sp
at io.github.streamingwithflink.chapter1.AverageSensorReadings$$anon$3.createSerializer(AverageSensorReadings.scala:50)
The instructions from the book are:
download flink-1.7.1-bin-scala_2.12.tgz
extract it
start the cluster: ./bin/start-cluster.sh
open Flink's web UI at http://localhost:8081
this all works fine
download the jar file that includes the examples in this book
run the example:
./bin/flink run \
-c io.github.streamingwithflink.chapter1.AverageSensorReadings \
examples-scala.jar
From the error message at the top of this post, it seems that the class is not found.
I put the jar in the same directory I am running the command from.
java -version
openjdk version "1.8.0_242"
OpenJDK Runtime Environment (Zulu 8.44.0.9-CA-macosx) (build 1.8.0_242-b20)
OpenJDK 64-Bit Server VM (Zulu 8.44.0.9-CA-macosx) (build 25.242-b20, mixed mode)
I also tried compiling the jar myself with the same error.
https://github.com/streaming-with-flink/examples-scala.git
and
mvn clean package
The error is the same.
Flink project tutorial
Running the SocketWindowWordCount example:
./bin/flink run examples/streaming/SocketWindowWordCount.jar --port 9000
I get a job, but it fails with java.net.ConnectException: Connection refused (Connection refused).
It is not clear to me what connection is refused. I tried different ports with no change.
How can I run Flink code successfully?
I tried to reproduce the failing AverageSensorReadings example, but it was working on my setup. I'll try to look deeper into it tomorrow.
Regarding the SocketWindowWordCount example, the error message indicates that the Flink job failed to open a connection to the socket on port 9000. You need to open the socket before you start the job. You can do this for example with netcat:
nc -l 9000
After the job is running, you can send messages by typing, and these messages will be ingested into the Flink job. You can see the stats in the Web UI evolve according to the number of words your messages consisted of.
Note that netcat closes the socket when you stop the Flink job.
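For context, the reason the socket must already be open is that the example uses Flink's socket source, which connects to an existing listener. Roughly (a simplified sketch, not the exact example code):
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
// Connects to an already-open socket, so something must be listening on
// localhost:9000 (for example, nc -l 9000) before the job is submitted;
// otherwise the job fails with Connection refused.
val text: DataStream[String] = env.socketTextStream("localhost", 9000)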
I am able to run the "Stream Processing with Apache Flink" code from IntelliJ.
See this post
I am able to run the "Stream Processing with Apache Flink" AverageSensorReadings code on my Flink cluster by using sbt. I have never used sbt before but thought I would try it. My project is here
Note that I moved AverageSensorReading.scala to chapter5, since that is where the code is explained, and changed the package to com.mitzit.
use sbt assembly to create the jar (build setup sketched below)
run on the Flink cluster:
./bin/flink run \
-c com.mitzit.chapter5.AverageSensorReadings \
/path/to/project/sbt-flink172/target/scala-2.11/sbt-flink172-assembly-0.1.jar
It works fine. I have no idea why this works and the mvn-compiled jar does not.
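The sbt setup is roughly along these lines (a simplified sketch, not copied verbatim from my project; the plugin and dependency versions are illustrative):
project/plugins.sbt:
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")
build.sbt:
name := "sbt-flink172"
version := "0.1"
scalaVersion := "2.11.12"
// Flink dependencies are marked provided because the cluster supplies them at runtime
libraryDependencies ++= Seq(
  "org.apache.flink" %% "flink-scala" % "1.7.2" % "provided",
  "org.apache.flink" %% "flink-streaming-scala" % "1.7.2" % "provided"
)
Running sbt assembly then produces the fat jar under target/scala-2.11/ that the flink run command above points at.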

nc: command not found while setting up Apache Samza

When I try to set up Apache Samza, I get nc: command not found when starting ZooKeeper.
I'm running the command:
bin/grid bootstrap
I get:
/c/..../hello-samza/bin/grid: line 170: nc: command not found
I'm running the command in the Git Bash console on Windows 10 Education.
I'm new to Samza and Git Bash in general. How can I fix this, or is there a way to add nc to Git Bash's basic commands?
Or am I running the commands in the wrong console?

Exception when running Spark job server in spark standalone mode

I'm trying out the Spark Job Server, specifically the Docker container option. I was able to run the WordCountExample app in Spark local mode. However, I ran into an exception when I tried to point the app to a remote Spark master.
Following are the commands I used to run the WordCountExample app:
1. sudo docker run -d -p 8090:8090 -e SPARK_MASTER=spark://10.501.502.503:7077 velvia/spark-jobserver:0.6.0
2. sbt job-server-tests/package
3. curl --data-binary @job-server-tests/target/scala-2.10/job-server-tests_2.10-0.6.2-SNAPSHOT.jar localhost:8090/jars/test
4. curl -d "input.string = a b c a b see" 'localhost:8090/jobs?appName=test&classPath=spark.jobserver.WordCountExample'
Following is the exception I hit when I ran step 4 above:
{
"status": "ERROR",
"result": {
"message": "Futures timed out after [15 seconds]",
"errorClass": "java.util.concurrent.TimeoutException",
"stack": ["scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)", "scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)", "scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)", "akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread$$anon$3.block(ThreadPoolBuilder.scala:169)", "scala.concurrent.forkjoin.ForkJoinPool.managedBlock(ForkJoinPool.java:3640)", "akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread.blockOn(ThreadPoolBuilder.scala:167)", "akka.dispatch.BatchingExecutor$Batch.blockOn(BatchingExecutor.scala:101)", "scala.concurrent.Await$.result(package.scala:107)", ...
I started the remote Spark cluster (master and workers) using
cd $SPARK_HOME
./sbin/start-all.sh
The remote cluster uses Spark version 1.5.1 (i.e., the prebuilt binary spark-1.5.1-bin-hadoop2.6).
Questions
Any suggestions on how I could debug this?
Are there any logs I could look into to figure out the root cause?
Thanks in advance.
This could be a network issue. The SJS (Spark Job Server) instance should be reachable from the Spark cluster.
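Connectivity has to work in both directions: the job server (acting as the driver) must reach the master, and the workers must be able to connect back to the driver. A quick first check from inside the job server container (the container name and the availability of nc in the image are assumptions) would be something like:
# <jobserver-container> is a placeholder for the actual container name or ID
docker exec -it <jobserver-container> nc -zv 10.501.502.503 7077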
I had the same problem with Spark 1.6.1. I changed the jobserver version to the latest (0.6.2.mesos-0.28.1.spark-1.6.1) and it works for me.