I built a MapReduce application in Scala using Akka actors. I am trying to run three JVMs in three terminal windows: one for my client and the other two for my remote servers (which I run on localhost to simulate a distributed application). Everything compiles fine, but at run time I get the exceptions below. I am new to sbt, Scala, and Akka. Does anyone have suggestions or ideas about how to fix this?
Server-side error:
Client-side error:
Update: Never mind. It turns out I made a silly mistake: I forgot to remove the old package name after deleting the package folder.
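For anyone who runs into the same thing, the mistake looked roughly like the sketch below (the package and file names here are hypothetical): the source file kept its old package declaration after the folder was deleted, so the fully qualified name the run configuration referred to no longer existed.

// src/main/scala/mapreduce/server/Server.scala (hypothetical layout)
package mapreduce.old.server // stale declaration; should be mapreduce.server after the move

object Server {
  def main(args: Array[String]): Unit = {
    // remote ActorSystem startup elided
  }
}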
Can someone explain the differences between the Maven artifacts kafka-clients, kafka_2.11-*, and scalatest-embedded-kafka_2.11, and when to use which? Is any of them specifically meant for writing unit tests?
In my repo we have been using kafka_2.9.2-0.8.1.1, and we are currently planning to move to the 0.9.0.1 Kafka broker. Hence I used kafka_2.11-0.9.0.1 and also tried kafka_2.10-0.9.0.1.
When the unit tests run, kafkaTestServer (KafkaServerStartable) hangs intermittently with kafka_2.10 and kafka_2.11, but with kafka_2.9.2-0.8.1.1 we never had the hang issue.
When it does proceed, it fails with a KafkaConfig init error or a ScalaObject-not-found error.
I am confused about these artifacts. Can anyone explain them to me?
The names encode the Scala version as well as the Kafka version. For example, kafka_2.9.2-0.8.1.1 is for Kafka 0.8.1.1 (i.e., the suffix after the dash is the Kafka version number), and the binaries were compiled with Scala 2.9.2.
Thus, when you write code, you want to use the artifact compiled with the same Scala version your project uses. I assume the hanging and the errors are due to a Scala version mismatch.
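If you build with sbt, the usual way to keep the Scala versions in sync is to let sbt append the suffix itself via %%. A minimal sketch, using the versions from the question:

// build.sbt
scalaVersion := "2.11.7"

// %% appends the project's Scala binary version, resolving to kafka_2.11-0.9.0.1
libraryDependencies += "org.apache.kafka" %% "kafka" % "0.9.0.1"

// kafka-clients is pure Java, so it takes a single % and carries no Scala suffix
libraryDependencies += "org.apache.kafka" % "kafka-clients" % "0.9.0.1"

The same convention applies to scalatest-embedded-kafka_2.11: the _2.11 suffix only says it was compiled against Scala 2.11. That one is the artifact aimed at unit tests, since it runs an embedded broker for ScalaTest.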
I would like to get started with Play, running Play 2.5, but even installing the "play-java" template with Activator gives an error:
Using unsupported version of Play: 2.5.0
I have downloaded both the full Activator package and the minimal one. Both fail on two machines (JDK 73).
No clue how to move forward on this; perhaps someone could help me out. I would appreciate it very much.
Best regards
Running 'activator ui' causes Activator to cycle, with the error message 'Using unsupported version of Play: 2.5.0'. The issue tracking the problem is https://github.com/typesafehub/activator/issues/1102.
Using 'activator' without the ui argument is a workaround.
Change to the play-java directory and run ./activator without the ui argument. This brings up the sbt command line, where the commands help, about, tasks, update, compile, test, and run all work. A web browser pointed at localhost:9000 shows a page containing "You're using Play 2.5.0".
The "run" command starts a Netty server; you can then interact with the application at localhost:9000 in a web browser.
As a workaround you can create a project without the UI by using something like
activator new my-app play-scala
The Play framework requires Java 8 since version 2.4:
https://www.playframework.com/documentation/2.4.x/Migration24#Java-8-support
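If you want the build to fail fast when started on an older JDK, you can add a version check in sbt; a minimal sketch (this is not part of the Play template):

// build.sbt
initialize := {
  val _ = initialize.value // run the previous initialization
  val required = "1.8"
  val current = sys.props("java.specification.version")
  assert(current == required, s"Play 2.4+ needs Java $required; you are running on $current")
}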
I wrote a function that is technically and logically correct, and it runs perfectly when run locally. But on the server I get the above error, which I cannot figure out. I am on Protractor 1.6.0, the same version as the server, and still cannot work out what causes this.
The scenario: I have created many it() functions inside my spec.js. When I run my spec, the first two it() functions execute fine, but on the third the script fails with the above error on the server. When I run the same spec on my local system with similar settings and configuration, it runs smoothly. Please share any suggestions if you have encountered this issue before, or a link to a blog post if you have one. I am a newbie. Thanks in advance.
Actually, I made a mistake: when using non-Angular locators, you have to go through the underlying WebDriver's findElement (i.e., browser.driver.findElement) instead of Protractor's findElement. Locally it ran without any issues, but on the server it gave the above error.
I've been wracking my brain over this for the last two days trying to get it to work. I have a local Spark installation on my Mac that I'm trying to attach a debugger to. I set:
SPARK_JAVA_OPTS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
Then I submit my job with spark-submit and launch my debug configuration in Eclipse, which is set up as a Socket Attach remote debugging session. The debugger attaches, my job resumes and executes, but none of my breakpoints are ever hit, no matter what I do.
The only way I can get it to hit a breakpoint is by attaching to a spark-shell, creating a Java Exception breakpoint and issuing
throw new java.lang.Exception()
The debugger will not stop at normal breakpoints for me.
I created a standalone Hello World Scala app and was able to attach to it and have it stop at a regular breakpoint without any issues.
Environment: Mac OS, latest Eclipse, latest Scala IDE, Spark 1.3.1, Scala 2.10.5
Thanks in advance.
I had a similar issue, and two things fixed my problem:
1. The .jar file and the source had drifted a little out of sync for me, so I had to recompile and redeploy.
2. In the Java options I had suspend=n; with that the JVM does not wait for the debugger to attach, so I switched it to suspend=y.
After correcting these two things it worked for me.
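One more thing worth checking: SPARK_JAVA_OPTS was deprecated back in Spark 1.0, so on Spark 1.3.1 it is safer to hand the agent flags to the driver through spark-submit itself. A sketch (the jar and class names below are placeholders):

spark-submit \
  --driver-java-options "-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005" \
  --class com.example.MyJob \
  my-job.jar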
I noticed my Scala IDE consuming all the available CPU. I tried to compile the project via sbt from the command line and got the same behaviour.
How can I find out what's going wrong? Is there a way to see which file or class/object/trait is being compiled?
I get the same issue with Scala 2.10.2 and 2.10.4-RC1.
I found out the problem was due to importing heterogenous.syntax from Slick 2.0. I have also been in contact with the Slick development team to give them some code to investigate.
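For reference, these are the imports in question, from Slick 2.0's HList support (the type alias below is just a hypothetical illustration):

import scala.slick.collection.heterogenous._
import scala.slick.collection.heterogenous.syntax._

object Tables {
  // compiling mappings over long HLists like this is what drove the compiler's CPU usage up in my case
  type Row = Int :: String :: String :: HNil
}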