Why can't Quill find driverClassName and jdbcUrl in IDEA? - scala

I am trying to learn Quill. As the docs say:
libraryDependencies ++= Seq(
  "org.xerial" % "sqlite-jdbc" % "3.28.0",
  "io.getquill" %% "quill-jdbc" % "3.7.0"
  //"io.getquill" %% "quill-jdbc" % "3.7.0-SNAPSHOT"
)
Both SQLite and MySQL use the quill-jdbc library.
Then I write this:
ctx.driverClassName=org.sqlite.JDBC
ctx.jdbcUrl=jdbc:sqlite:/path/to/db/file.db
IDEA tells me "can't resolve symbol driverClassName" and "can't resolve symbol jdbcUrl".
I don't know why. Should I use sbt directly instead of IDEA? I think I would get the same error.
So what should I do? Thanks!
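For what it's worth, if I read the Quill docs correctly those two ctx.* lines belong in src/main/resources/application.conf rather than in Scala source, and the context is then constructed from that config prefix. A minimal sketch under that assumption (the database path is a placeholder):

# src/main/resources/application.conf
ctx.driverClassName=org.sqlite.JDBC
ctx.jdbcUrl=jdbc:sqlite:/path/to/db/file.db

// Scala source
import io.getquill._

// "ctx" must match the prefix used in application.conf
lazy val ctx = new SqliteJdbcContext(SnakeCase, "ctx")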

Related

What does the exception 'dotc: Bad symbolic reference.' mean?

I want to work with Scala 3 and use some existing libraries. The example works with Scala 2.13.
When compiling I get this exception:
dotc: Bad symbolic reference. A signature in ../dmn-engine-1.4.0.jar(org/camunda/dmn/parser/ParsedDmn.class)
refers to Serializable/T in package scala which is not available.
It may be completely missing from the current classpath, or the version on
the classpath might be incompatible with the version used when compiling ../dmn-engine-1.4.0.jar(org/camunda/dmn/parser/ParsedDmn.class).
Here is my build.sbt
lazy val extension = project
  .in(file("extension"))
  .settings(
    scalaVersion := dottyVersion,
    libraryDependencies ++= Seq(
      ("org.camunda.bpm.extension.dmn.scala" % "dmn-engine" % "1.4.0").withDottyCompat(scalaVersion.value),
      "com.novocode" % "junit-interface" % "0.11" % "test",
      "org.scalatest" %% "scalatest" % "3.2.2" % Test
    ),
    scalacOptions ++= {
      if (isDotty.value) Seq("-source:3.0-migration") else Nil
    }
  )
Update
OK, it works with 2.12; with 2.13 I get the following error:
Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.NoClassDefFoundError: scala/Serializable
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
...
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:526)
Caused by: java.lang.ClassNotFoundException: scala.Serializable
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 19 more
After figuring out that it only worked with Scala 2.12, I found the problem: the dependency "org.camunda.bpm.extension.dmn.scala" % "dmn-engine" % "1.4.0" is built against Scala 2.12.
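In case it helps anyone else, a minimal sketch of the workaround, assuming you stay on Scala 2.12 until a 2.13/3 build of dmn-engine exists (the exact 2.12 patch version is just an example):

lazy val extension = project
  .in(file("extension"))
  .settings(
    scalaVersion := "2.12.15", // dmn-engine 1.4.0 is published for Scala 2.12 only
    libraryDependencies ++= Seq(
      "org.camunda.bpm.extension.dmn.scala" % "dmn-engine" % "1.4.0",
      "org.scalatest" %% "scalatest" % "3.2.2" % Test
    )
  )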

What happened to CrossVersion.fullMapped in sbt 1.x (or specifically 1.2.8)?

In fact, I don't see any of the xMapped functions in the new release of the librarymanagement repository.
I get this error:
build.sbt:84: error: value fullMapped is not a member of object sbt.librarymanagement.CrossVersion
"org.scalamacros" % "paradise" % "2.1.1" cross CrossVersion.fullMapped{
for the following usage:
addCompilerPlugin( // For circe generic:
  "org.scalamacros" % "paradise" % "2.1.1" cross CrossVersion.fullMapped {
    _ => scalaVersionSelect
  }
)
CrossVersion.fullMapped wasn't kept in its sbt 0.13 form because sbt 1.x wanted to be able to serialize its key types (and CrossVersion is a transitive part of that object graph).
It was replaced by CrossVersion.fullWith, which instead of taking a general String => String function takes a String prefix and a String suffix to prepend/append to the full Scala version.
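For completeness, a rough sketch of what fullWith usage looks like (the empty prefix and suffix here just reproduce CrossVersion.full; they are placeholders, not something your build needs):

addCompilerPlugin(
  "org.scalamacros" % "paradise" % "2.1.1"
    cross CrossVersion.fullWith("", "") // artifact resolved as paradise_<full scala version>
)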
But given that you're discarding the input and always using scalaVersionSelect, you don't even need CrossVersion.fullWith; you can just use CrossVersion.constant, as in:
addCompilerPlugin(
  "org.scalamacros" % "paradise" % "2.1.1"
    cross CrossVersion.constant(scalaVersionSelect)
)

SBT: dynamically detect building platform

I am trying to dynamically change which jar dependency my project references, depending on the platform (Windows or Linux).
So, it's a very trivial scenario.
How can I implement this simple check in build.sbt?
A potential approach is to pattern match on System.getProperty("os.name") within a custom setting, like so:
val configureDependencyByPlatform = settingKey[ModuleID]("Dynamically change reference to the jars dependency depending on the platform")

configureDependencyByPlatform := {
  System.getProperty("os.name").toLowerCase match {
    case mac if mac.contains("mac")       => "org.example" %% "somelib-mac" % "1.0.0"
    case win if win.contains("win")       => "org.example" %% "somelib-win" % "1.0.0"
    case linux if linux.contains("linux") => "org.example" %% "somelib-linux" % "1.0.0"
    case osName => throw new RuntimeException(s"Unknown operating system $osName")
  }
}
and then add the evaluated setting to libraryDependencies as follows
libraryDependencies ++= Seq(
  configureDependencyByPlatform.value,
  "org.scalatest" %% "scalatest" % "3.0.5",
  ...
)

SBT Compiler crash using Scala-Breeze

I am writing code to perform kernel K-Means (see https://en.wikipedia.org/wiki/K-means_clustering, but with a kernel trick). I need to generate data, and as a first simple generator I tried to implement a Gaussian mixture model. Here is my code:
package p02kmeans

import breeze.linalg._
import breeze.stats.distributions._

/**
 * First data generation is simple: a Gaussian mixture model.
 */
object Data {
  class GaussianClassParam(
    val mean: Double,
    val sd: Double)

  /**
   * @param proportion marginal probability for each label
   * @param param param[j][k] returns the GaussianClassParam for the k class of the j variable
   * @param nObs number of observations to be generated
   * @return DenseMatrix_ij where i is the observation index and j is the variable number
   */
  def gaussianMixture(
      proportion: DenseVector[Double],
      param: Vector[Vector[GaussianClassParam]],
      nObs: Int): DenseMatrix[Double] = {
    val nVar = param.size
    val multiSampler = Multinomial(proportion) // sampler for the latent class
    val varSamplerVec = param.map(v => v.map(c => Gaussian(c.mean, c.sd)))
    val zi = DenseVector.fill[Int](nObs)(multiSampler.sample)
    val data = DenseMatrix.tabulate[Double](nObs, nVar)((i, j) => varSamplerVec(j)(zi(i)).sample)
    return data
  }
}
When I try to compile my code (I use Scala IDE and sbt eclipse on Windows 10) I get 2 errors:
Error in Scala compiler: assertion failed: List(method apply$mcI$sp, method apply$mcI$sp)
SBT builder crashed while compiling. The error message is 'assertion failed: List(method apply$mcI$sp, method apply$mcI$sp)'. Check Error Log for details.
The error is triggered by the line:
val data = DenseMatrix.tabulate[Double](nObs, nVar)((i, j) => varSamplerVec(j)(zi(i)).sample)
It disappears with:
val data = DenseMatrix.tabulate[Double](nObs, nVar)((i, j) => 12.0)
Could you help me debug this ?
My sbt configuration:
name := "Sernel"
version := "1.0"
scalaVersion := "2.11.8"
libraryDependencies ++= Seq(
"org.scalanlp" %% "breeze" % "0.13.1",
"org.scalanlp" %% "breeze-natives" % "0.13.1",
"org.scalanlp" %% "breeze-viz" % "0.13.1"
)
I have the same errors on my macOS setup.
If you want to test the whole package (e.g., to reproduce the error), the code is available on GitHub: https://github.com/vkubicki/sernel, and I am happy to provide directions :).
It seems like a compiler bug (I suppose in Scala macros, as Breeze uses those). You could try a full clean of the project (maybe even including the .ivy2 folder; this could be a difference between your macOS and Windows setups) and also update your Scala to 2.11.11 (or maybe even 2.12.x).
However, a similar issue with Scala 2.11.6 (and something tells me it is inherited by subsequent versions of Scala) wasn't fixed: https://issues.scala-lang.org/browse/SI-9284
So you'll probably have to perform a clean from time to time, or try some other NumPy analogs such as Scalala or Nd4j/Nd4s.
It could also help to try another IDE (IDEA/Atom) or to use "bare" sbt, as Eclipse is probably interfering by calling Scala's compiler front end.
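If the crash really is tied to the specialized closure passed to tabulate, one possible workaround (a sketch only, not verified against this exact compiler bug) is to fill the matrix with an explicit loop instead:

// inside gaussianMixture, replacing the DenseMatrix.tabulate call
val data = DenseMatrix.zeros[Double](nObs, nVar)
for (i <- 0 until nObs; j <- 0 until nVar) {
  data(i, j) = varSamplerVec(j)(zi(i)).sample()
}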

Assuming many variables at once

I'm trying to set up the following constraints:
assume(E1A::integer,E2A::integer,...,E2B::integer,...,E3C::integer)
additionally(E1A>=0,E2A>=0,...,E3C>=0)
additionally(E1A<=3,E2A<=3,...,E3C<=3)
Is there any way to do this without typing out all the terms E1A, E2A, ..., E3C? I tried doing
for i from 0 to 3 do (assume(EiA::integer)) end do
as a shortcut, but Maple didn't like that, presumably because it didn't view the i as an indexing variable.
You can form names by concatenation.
restart:
assume( seq( cat(`E`,i,`A`)::integer, i=1..3 ) );
And now, to test,
[ seq( cat(`E`,i,`A`), i=1..3 ) ]:
map( about, % ):
Originally E1A, renamed E1A~:
is assumed to be: integer
Originally E2A, renamed E2A~:
is assumed to be: integer
Originally E3A, renamed E3A~:
is assumed to be: integer
You can also nest seq, e.g.,
restart:
assume( seq( seq( cat(`E`,i,abc)::integer, i=1..3), abc=[A,B,C] ) );
[ seq( seq( cat(`E`,i,abc), i=1..3), abc=[A,B,C] ) ]:
map( about, % ):
With the elementwise operator and the concatenation operator you can get all your assuming down to one line:
assume(E||(1..3)||A ::~ AndProp(integer, RealRange(0,3)));