I want to add data to Redis:

import com.redis._

object Obj1 {
  val redis = new RedisClient

  def insert(): Unit = {
    val data = List(
      (111, 222, 333),
      (444, 555, 666)
    )
    for ((x, i) <- data.zipWithIndex) {
      redis lpush (f"key1$i", x._1)
      redis lpush (f"key2$i", x._2)
    }
  }
}
It complains at runtime:
[error] (run-main) java.lang.Exception: ERR Operation against a key holding the wrong kind of value
java.lang.Exception: ERR Operation against a key holding the wrong kind of value
It seems to fail because of the $i for some reason, although even a hard-coded "key123" causes the same error.
Client: https://github.com/debasishg/scala-redis
Your code works absolutely fine for me on Scala 2.10.3 with scala-redis 2.11, even with string interpolation (tried in IntelliJ with sbt 0.13).
Try updating to the latest version of the client if you are using an older one. If you are using sbt:
libraryDependencies += "net.debasishg" % "redisclient_2.10" % "2.11"
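If the error persists after upgrading, the usual cause of "ERR Operation against a key holding the wrong kind of value" is that the key already exists in Redis with a non-list type (for example a plain string), since lpush only works on list keys. A minimal sketch, assuming scala-redis's del command, that clears any such stale keys before re-running the insert:

import com.redis._

object CleanupSketch {
  val redis = new RedisClient

  // Hypothetical cleanup: delete keys that may have been created earlier
  // with a non-list type, so that lpush no longer hits the wrong-type error.
  def clearKeys(): Unit =
    for (i <- 0 until 2) {
      redis.del(f"key1$i")
      redis.del(f"key2$i")
    }
}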
When I was trying to use Chisel to build an FSM, I used Enum() as the Chisel tutorial suggested. However, I encountered the following error.
My code:
val sIdle::s1::s2::s3::s4::Nil = Enum(UInt(), 5)
However, when I executed sbt run, it printed:
[error] /Users/xxx.scala:28:3: object java.lang.Enum is not a value
[error] Enum(UInt(),5)
[error] ^
My build.sbt file is:
scalaVersion := "2.11.12"
resolvers ++= Seq(
Resolver.sonatypeRepo("snapshots"),
Resolver.sonatypeRepo("releases")
)
libraryDependencies += "edu.berkeley.cs" %% "chisel3" % "3.1.+"
Please help!
Turning my comment into a full answer so it's more obvious for future people.
In chisel3, a lot of things that were in package Chisel in Chisel2 were moved into the chisel3.util package. You can use the ScalaDoc API to search for things like Enum or switch to see where they are located (along with other associated documentation).
Also, in chisel3, Enum(type, size) has been deprecated in favor of Enum(size), i.e. you should use:
import chisel3._
import chisel3.util.Enum
val sIdle :: s1 :: s2 :: s3 :: s4 :: Nil = Enum(5)
I would also like to mention that we have a new "ChiselEnum" coming that provides more functionality than the existing API, and we intend to extend its functionality further. If you build chisel3 from source you can use it already, or you can wait for the release of 3.2. An example of the new enum:
import chisel3._
import chisel3.experimental.ChiselEnum
object EnumExample extends ChiselEnum {
  val e0, e1, e2 = Value  // Assigns default values starting at 0
  val e100 = Value(100.U) // Can provide specific values if desired
}
import EnumExample._
val myState = Reg(EnumExample()) // Can give a register the actual type instead of just UInt
myState := e100
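For completeness, states created with Enum are typically consumed in a switch block from chisel3.util to drive the FSM. Below is a minimal sketch; the module I/O and the transition logic are made up for illustration, only the Enum/switch usage mirrors the API above:

import chisel3._
import chisel3.util.{Enum, is, switch}

class SimpleFsm extends Module {
  val io = IO(new Bundle {
    val go   = Input(Bool())
    val done = Output(Bool())
  })

  // Same five states as in the question, created with the chisel3-style Enum(size)
  val sIdle :: s1 :: s2 :: s3 :: s4 :: Nil = Enum(5)
  val state = RegInit(sIdle)

  // Advance through the states, starting when io.go is asserted
  switch(state) {
    is(sIdle) { when(io.go) { state := s1 } }
    is(s1)    { state := s2 }
    is(s2)    { state := s3 }
    is(s3)    { state := s4 }
    is(s4)    { state := sIdle }
  }

  io.done := state === s4
}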
By default, Enum refers to java.lang.Enum. Chisel has its own Enum object, which you have to import before using it:
import Chisel.Enum
import Chisel.UInt
val sIdle::s1::s2::s3::s4::Nil = Enum(UInt(), 5)
// Or an alternative way to unpack a List:
// val List(sIdle, s1, s2, s3, s4) = Enum(UInt(), 5)
I'm trying to run a simple program that copies the contents of an RDD into an HBase table. I'm using the spark-hbase-connector by nerdammer (https://github.com/nerdammer/spark-hbase-connector) and running the code with spark-submit on a local cluster on my machine. The Spark version is 2.1.
This is the code I'm trying to run:
import org.apache.spark.{SparkConf, SparkContext}
import it.nerdammer.spark.hbase._

object HbaseConnect {
  def main(args: Array[String]) {
    val sparkConf = new SparkConf()
    sparkConf.set("spark.hbase.host", "hostname")
    sparkConf.set("zookeeper.znode.parent", "/hbase-unsecure")
    val sc = new SparkContext(sparkConf)

    val rdd = sc.parallelize(1 to 100)
      .map(i => (i.toString, i + 1, "Hello"))

    rdd.toHBaseTable("mytable").toColumns("column1", "column2")
      .inColumnFamily("mycf")
      .save()

    sc.stop
  }
}
Here is my build.sbt:
name := "HbaseConnect"
version := "0.1"
scalaVersion := "2.11.8"
assemblyMergeStrategy in assembly := {
  case PathList("META-INF", xs @ _*) => MergeStrategy.discard
  case x => MergeStrategy.first
}

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.1.0" % "provided",
  "it.nerdammer.bigdata" % "spark-hbase-connector_2.10" % "1.0.3")
The execution gets stuck, showing the following info:
17/11/22 10:20:34 INFO ZooKeeperRegistry: ClusterId read in ZooKeeper is null
17/11/22 10:20:34 INFO TableOutputFormat: Created table instance for mytable
I am unable to identify the problem with ZooKeeper. HBase clients discover the running HBase cluster using the following two properties:
1. hbase.zookeeper.quorum: used to connect to the ZooKeeper cluster
2. zookeeper.znode.parent: tells which znode keeps the data (and the address of the HMaster) for the cluster
I overrode these two properties in the code with:
sparkConf.set("spark.hbase.host", "hostname")
sparkConf.set("zookeeper.znode.parent", "/hbase-unsecure")
Another question: there is no spark-hbase-connector_2.11. Can the provided spark-hbase-connector_2.10 version support Scala 2.11?
Problem solved. I had to override the HMaster port to 16000 (which is my HMaster port number; I'm using Ambari). The default value that sparkConf uses is 60000.
sparkConf.set("hbase.master", "hostname" + ":16000")
I'm trying to use PureConfig and ConfigFactory for my Spark application configuration.
Here is my code:
import com.typesafe.config.Config
import pureconfig.loadConfigOrThrow

object Source {
  def apply(keyName: String, configArguments: Config): Source = {
    keyName.toLowerCase match {
      case "mysql" =>
        val properties = loadConfigOrThrow[DBConnectionProperties](configArguments)
        new MysqlSource(None, properties)
      case "files" =>
        val properties = loadConfigOrThrow[FilesSourceProperties](configArguments)
        new Files(properties)
      case _ => throw new NoSuchElementException(s"Unknown Source ${keyName.toLowerCase}")
    }
  }
}
import Source
val config = ConfigFactory.parseString(result.mkString("\n"))
val source = Source("mysql",config.getConfig("source.mysql"))
When I run it from the IDE (IntelliJ) or directly with java (i.e. java -jar ...) it works fine.
But when I run it with spark-submit it fails with the following error:
Exception in thread "main" java.lang.NoSuchMethodError: shapeless.Witness$.mkWitness(Ljava/lang/Object;)Lshapeless/Witness;
From a quick search I found an issue similar to this question, which suggests the cause is that both Spark and PureConfig depend on shapeless, but with different versions.
I tried to shade it as suggested in the answer:
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("shapeless.**" -> "shadeshapless.@1")
    .inLibrary("com.github.pureconfig" %% "pureconfig" % "0.7.0").inProject
)
but that didn't work either.
Could it be something else? Any idea what might work?
Thanks
You also have to shade shapeless inside its own JAR, in addition to pureconfig:
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("shapeless.**" -> "shadeshapless.@1")
    .inLibrary("com.chuusai" % "shapeless_2.11" % "2.3.2")
    .inLibrary("com.github.pureconfig" %% "pureconfig" % "0.7.0")
    .inProject
)
Make sure to add the correct shapeless version.
I googled a lot and am totally stuck now. I know there are similar questions, but please read to the end. I have tried all the proposed solutions and none of them worked.
I am trying to use the IMain class from scala.tools.nsc within a Play 2.1 project (Using Scala 2.10.0).
Controller Code
This is the code where I try to use IMain in a WebSocket. This is only for testing.
object Scala extends Controller {
  def session = WebSocket.using[String] { request =>
    val interpreter = new IMain()
    val (out, channel) = Concurrent.broadcast[String]
    val in = Iteratee.foreach[String] { code =>
      interpreter.interpret(code) match {
        case Results.Error      => channel.push("error")
        case Results.Incomplete => channel.push("incomplete")
        case Results.Success    => channel.push("success")
      }
    }
    (in, out)
  }
}
As soon as something is sent over the WebSocket, Play logs the following error:
Failed to initialize compiler: object scala.runtime in compiler mirror not found.
** Note that as of 2.8 scala does not assume use of the java classpath.
** For the old behavior pass -usejavacp to scala, or if using a Settings
** object programatically, settings.usejavacp.value = true.
Build.scala
object ApplicationBuild extends Build {

  val appName = "escalator"
  val appVersion = "1.0-SNAPSHOT"

  val appDependencies = Seq(
    "org.scala-lang" % "scala-compiler" % "2.10.0"
  )

  val main = play.Project(appName, appVersion, appDependencies).settings(
  )
}
What I have tried so far
None of the following worked:
I have included fork := true in the Build.scala
A Settings object with:
embeddedDefaults[MyType]
usejavacp.value = true
The solution proposed as an answer to the question Embedded Scala REPL inherits parent classpath
I don't know what to do now.
The problem here is that sbt doesn't add scala-library to the classpath.
The following workaround works.
First, create a folder lib in the top-level project directory (the parent of app, conf, etc.) and copy scala-library.jar into it.
Then you can use the following code to host an interpreter:
val settings = new Settings
settings.bootclasspath.value += scala.tools.util.PathResolver.Environment.javaBootClassPath + File.pathSeparator + "lib/scala-library.jar"

val in = new IMain(settings) {
  override protected def parentClassLoader = settings.getClass.getClassLoader()
}

val res = in.interpret("val x = 1")
The above builds the boot classpath by adding the Scala library to the Java boot classpath. It is not a problem with the Play framework; it comes from sbt. The same problem occurs for any Scala project run with sbt (tested with a simple project); when run from Eclipse it works fine.
EDIT: Link to sample project demonstrating the above.
I wonder if the scala-reflect jar is missing. Try adding this to appDependencies too:
"org.scala-lang" % "scala-reflect" % "2.10.0"
I would like to make my ScalaCheck property tests in my specs2 test suite deterministic, temporarily, to ease debugging. Right now, different values could be generated each time I re-run the test suite, which makes debugging frustrating, because you don't know if a change in observed behaviour is caused by your code changes, or just by different data being generated.
How can I do this? Is there an official way to set the random seed used by ScalaCheck?
I'm using sbt to run the test suite.
Bonus question: Is there an official way to print out the random seed used by ScalaCheck, so that you can reproduce even a non-deterministic test run?
If you're using pure ScalaCheck properties, you should be able to use the Test.Params class to change the java.util.Random instance which is used, and provide your own which always returns the same sequence of values:
def check(params: Test.Parameters, p: Prop): Test.Result
[updated]
I just published a new specs2-1.12.2-SNAPSHOT where you can use the following syntax to specify your random generator:
case class MyRandomGenerator() extends java.util.Random {
  // implement a deterministic generator
}

"this is a specific property" ! prop { (a: Int, b: Int) =>
  (a + b) must_== (b + a)
}.set(MyRandomGenerator(), minTestsOk -> 200, workers -> 3)
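As a sketch (not part of the original snapshot, just an illustration) of what such a deterministic generator could look like: seeding java.util.Random with a fixed value makes every freshly created instance produce the same sequence.

// Sketch: a java.util.Random with a hard-coded seed is deterministic,
// so every test run sees the same pseudo-random sequence.
case class MyRandomGenerator() extends java.util.Random(12345L)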
As a general rule, when testing on non-deterministic inputs you should try to echo or save those inputs somewhere when there's a failure.
If the data is small, you can include it in the label or error message that gets shown to the user; for example, in an xUnit-style test (pardon the rough Scala, I'm new to the syntax):
def testLength(x: String): Unit = {
  assert(x.length > 10, s"length check failed for '$x'")
}
If the data is large, for example an auto-generated DB, you might either store it in a non-volatile location (e.g. /tmp with a timestamped name) or show the seed used to generate it.
The next step is important: take that value, or seed, or whatever, and add it to your deterministic regression tests, so that it gets checked every time from now on.
You say you want to make ScalaCheck deterministic "temporarily" to reproduce this issue; I say you've found a buggy edge-case which is well-suited to becoming a unit test (perhaps after some manual simplification).
Bonus question: Is there an official way to print out the random seed used by ScalaCheck, so that you can reproduce even a non-deterministic test run?
From specs2-scalacheck version 4.6.0 this is now the default behaviour:
Given the test file HelloSpec:
package example

import org.specs2.mutable.Specification
import org.specs2.ScalaCheck

class HelloSpec extends Specification with ScalaCheck {
  s2"""
  a simple property $ex1
  """

  def ex1 = prop((s: String) => s.reverse.reverse must_== "")
}
build.sbt config:
import Dependencies._

ThisBuild / scalaVersion     := "2.13.0"
ThisBuild / version          := "0.1.0-SNAPSHOT"
ThisBuild / organization     := "com.example"
ThisBuild / organizationName := "example"

lazy val root = (project in file("."))
  .settings(
    name := "specs2-scalacheck",
    libraryDependencies ++= Seq(
      specs2Core,
      specs2MatcherExtra,
      specs2Scalacheck
    ).map(_ % "test")
  )
project/Dependencies.scala:
import sbt._

object Dependencies {
  lazy val specs2Core = "org.specs2" %% "specs2-core" % "4.6.0"
  lazy val specs2MatcherExtra = "org.specs2" %% "specs2-matcher-extra" % specs2Core.revision
  lazy val specs2Scalacheck = "org.specs2" %% "specs2-scalacheck" % specs2Core.revision
}
When you run the test from the sbt console:
sbt:specs2-scalacheck> testOnly example.HelloSpec
You get the following output:
[info] HelloSpec
[error] x a simple property
[error] Falsified after 2 passed tests.
[error] > ARG_0: "\u0000"
[error] > ARG_0_ORIGINAL: "猹"
[error] The seed is X5CS2sVlnffezQs-bN84NFokhAfmWS4kAg8_gJ6VFIP=
[error]
[error] > '' != '' (HelloSpec.scala:11)
[info] Total for specification HelloSpec
To reproduce that specific run (i.e. with the same seed), you can take the seed from the output and pass it on the command line with scalacheck.seed:
sbt:specs2-scalacheck>testOnly example.HelloSpec -- scalacheck.seed X5CS2sVlnffezQs-bN84NFokhAfmWS4kAg8_gJ6VFIP=
And this produces the same output as before.
You can also set the seed programmatically using setSeed:
def ex1 = prop((s: String) => s.reverse.reverse must_== "").setSeed("X5CS2sVlnffezQs-bN84NFokhAfmWS4kAg8_gJ6VFIP=")
Yet another way to provide the seed is to pass an implicit Parameters instance where the seed is set:
package example
import org.specs2.mutable.Specification
import org.specs2.ScalaCheck
import org.scalacheck.rng.Seed
import org.specs2.scalacheck.Parameters
class HelloSpec extends Specification with ScalaCheck {
  s2"""
  a simple property $ex1
  """

  implicit val params = Parameters(minTestsOk = 1000, seed = Seed.fromBase64("X5CS2sVlnffezQs-bN84NFokhAfmWS4kAg8_gJ6VFIP=").toOption)

  def ex1 = prop((s: String) => s.reverse.reverse must_== "")
}
Here is the documentation about all those various ways.
This blog also talks about this.
For scalacheck-1.12 this configuration worked:
new Test.Parameters {
  override val rng = new scala.util.Random(seed)
}
For scalacheck-1.13 it doesn't work anymore, since the rng method was removed. Any thoughts?
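In case it helps anyone on newer versions: from ScalaCheck 1.14 onwards, Test.Parameters carries an initial seed, so a fixed seed can be set via withInitialSeed. A minimal sketch, assuming that API is available in the version you use:

import org.scalacheck.{Prop, Test}
import org.scalacheck.rng.Seed

val commutative = Prop.forAll { (a: Int, b: Int) => a + b == b + a }

// Fix the initial seed so every run generates the same values.
val params = Test.Parameters.default.withInitialSeed(Seed(42L))
val result = Test.check(params, commutative)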