I'm trying to create a client for Elasticsearch in Scala (school project), but when I want to import Elasticsearch I get some problems.
I've written an sbt file:
libraryDependencies += "org.elasticsearch" %% "elasticsearch" % "1.4.2"
libraryDependencies += "org.apache.lucene" % "lucene-core" % "4.10.2"
along with other Lucene dependencies.
And when I try to use it:
import org.elasticsearch.node.Nodebuilder.*
object Setup {
  Node node = nodeBuilder().node();
  Client client = node.client();
}
It does recognize org.elasticsearch.node but not .Nodebuilder.
Does anyone have an idea?
Solved:
import org.elasticsearch.node.NodeBuilder.nodeBuilder
val node = nodeBuilder().node()
val client = node.client()
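For completeness, here is the working setup as a minimal self-contained sketch; wrapping it in an App and adding the close() call (which the Node interface in Elasticsearch 1.x provides) are my additions:

import org.elasticsearch.node.NodeBuilder.nodeBuilder

object Setup extends App {
  val node = nodeBuilder().node()   // starts an embedded Elasticsearch node
  val client = node.client()        // a Client bound to that node
  // ... use the client here ...
  node.close()                      // shut the embedded node down when finished
}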
I would suggest using the following library: https://github.com/sksamuel/elastic4s
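For illustration only, a rough sketch of what an elastic4s-based client looked like in the 1.x releases contemporary with Elasticsearch 1.4; this is from memory of the project's README of that era, the DSL has changed substantially since, so treat the exact method names as assumptions:

import com.sksamuel.elastic4s.ElasticClient
import com.sksamuel.elastic4s.ElasticDsl._

object Elastic4sSetup extends App {
  val client = ElasticClient.local   // or ElasticClient.remote("host", 9300)
  client.execute {
    index into "bands/artists" fields ("name" -> "coldplay")
  }
  client.close()
}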
I'm trying to install the setpropex utility for the Android emulator on my Ubuntu VM. The build.sbt file has the following code:
import sbt.IO
import sbt.Keys._
import sbt._
name := "setpropex"
scalaVersion := "2.11.8"
exportJars := true
test in assembly := {}
libraryDependencies += "net.java.dev.jna" % "jna" % "4.2.1"
libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.6" % "test"
val dalvikTask = TaskKey[Unit]("dalvik", "build a command line program for Android")

(dalvikTask := (assembly, streams) map { (asm, s) =>
  val appName = "setpropex"
  val dexFileName = s"$appName.dex"
  s"dx --dex --output=$dexFileName ${asm.getPath}"
  val target = new File(appName)
  IO.copyFile(new File("self-running.sh"), target, true)
  val appData = "data.tar"
  s"tar -cf $appData run.sh com $dexFileName"
  IO.append(target, IO.readBytes(new File(appData)))
  s"chmod +x $appName"
  s.log.info(s"Android cmdline app: $appName build successful!")
  s.log.info("Output: " + new File(".").getCanonicalPath + "/" + appName)
  s"adb push $appName /data/local/tmp"
  s"adb shell /data/local/tmp/$appName"
})
When I run the command "sbt dalvik", I get the following error:
error: value map is not a member of (sbt.TaskKey[sbt.File], sbt.TaskKey[sbt.Keys.TaskStreams])
dalvikTask := (assembly, streams) map {(asm, s) =>
[error] Type error in expression
It seems to me that I'm missing something quite fundamental in the syntax. I'm a complete noob in the Scala language and I'm using it only for this particular project. Could someone kindly guide me in rectifying this error? I would be happy to provide any further clarification if needed.
This seems to be about this project:
https://github.com/wuhx/setpropex
It seems like the build system was written with an old sbt version in mind. Try creating a project/build.properties file with the content sbt.version=0.13.18.
Or you can migrate to the newer sbt 1.0 syntax:
See https://www.scala-sbt.org/1.x/docs/Migrating-from-sbt-013x.html#Migrating+from+the+tuple+enrichments for the details. However, for this to work you'll probably also need to upgrade your sbt plugins, as the plugin API changed after sbt 0.13.
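If you do migrate, a rough sketch of the same task in sbt 1.x style (the .value macro replaces the old tuple enrichment; this also assumes sbt-assembly has been upgraded to an sbt 1.x release):

val dalvik = taskKey[Unit]("build a command line program for Android")

dalvik := {
  val asm = assembly.value   // the assembled jar produced by sbt-assembly
  val s   = streams.value    // per-task logger and streams
  val appName = "setpropex"
  // ... same body as in the question, using asm and s ...
  s.log.info(s"Android cmdline app: $appName build successful!")
}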
I am using Play 2.6 with Slick, and my database connection times out every second or third try. The only way to get the connection up again is to restart the app with sbt run. It's driving me crazy; any help is appreciated.
To be clear, I'm using Slick with a local MySQL database for very lightweight usage.
build.sbt
libraryDependencies += "mysql" % "mysql-connector-java" % "8.0.11"
libraryDependencies += "com.typesafe.slick" %% "slick" % "3.2.3"
libraryDependencies += "com.typesafe.slick" %% "slick-hikaricp" % "3.2.3"
application.conf
# db connections = ((physical_core_count * 2) + effective_spindle_count)
fixedConnectionPool = 5
repository.dispatcher {
  executor = "thread-pool-executor"
  throughput = 1
  thread-pool-executor {
    fixed-pool-size = ${fixedConnectionPool}
  }
}
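For context, a dispatcher block like this is normally wired to a dedicated execution context for the database calls; a minimal sketch of how that is usually done in Play 2.6 (the class name here is illustrative, not from the question):

import javax.inject.{Inject, Singleton}
import akka.actor.ActorSystem
import play.api.libs.concurrent.CustomExecutionContext

// Illustrative class: runs blocking Slick calls on the "repository.dispatcher" pool above.
@Singleton
class DatabaseExecutionContext @Inject()(system: ActorSystem)
  extends CustomExecutionContext(system, "repository.dispatcher")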
Error message:
java.sql.SQLTransientConnectionException: <dbconfig> - Connection is not available, request timed out after 1001ms.
Stack Trace:
com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:548)
com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:186)
com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:145)
com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:83)
slick.jdbc.hikaricp.HikariCPJdbcDataSource.createConnection(HikariCPJdbcDataSource.scala:14)
slick.jdbc.JdbcBackend$BaseSession.<init>(JdbcBackend.scala:453)
slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:46)
slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:37)
slick.basic.BasicBackend$DatabaseDef.acquireSession(BasicBackend.scala:249)
slick.basic.BasicBackend$DatabaseDef.acquireSession$(BasicBackend.scala:248)
slick.jdbc.JdbcBackend$DatabaseDef.acquireSession(JdbcBackend.scala:37)
slick.basic.BasicBackend$DatabaseDef$$anon$2.run(BasicBackend.scala:274)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
Update
I didn't originally include the specific db config from application.conf:
<name> {
  slick.driver = scala.slick.driver.MySQLDriver
  driver = "com.mysql.cj.jdbc.Driver"
  url = __
  user = __
  password = __
}
I've added to the config:
keepAliveConnection = true
connectionPool = disabled
And it's working fine.
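For reference, a minimal sketch of how that <name> block is typically loaded with Slick 3.x. Note that forgetting to close the Database (and its Hikari pool), for example across dev-mode reloads, is a common cause of this kind of timeout, though the config change above is what resolved it here:

import slick.jdbc.MySQLProfile.api._

// "<name>" is the placeholder config key from application.conf above.
val db = Database.forConfig("<name>")
try {
  // ... run queries ...
} finally {
  db.close()  // releases the connection pool
}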
I have a publicly available Amazon S3 resource (a text file) and want to access it from Spark. That means I don't have any Amazon credentials. It works fine if I just want to download it:
val bucket = "<my-bucket>"
val key = "<my-key>"
val client = new AmazonS3Client
val o = client.getObject(bucket, key)
val content = o.getObjectContent // <= can be read and used as input stream
However, when I try to access the same resource from a Spark context:
val conf = new SparkConf().setAppName("app").setMaster("local")
val sc = new SparkContext(conf)
val f = sc.textFile(s"s3a://$bucket/$key")
println(f.count())
I receive the following error with stacktrace:
Exception in thread "main" com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain
at com.amazonaws.auth.AWSCredentialsProviderChain.getCredentials(AWSCredentialsProviderChain.java:117)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3521)
at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2653)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:92)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2687)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:371)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:221)
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1781)
at org.apache.spark.rdd.RDD.count(RDD.scala:1099)
at com.example.Main$.main(Main.scala:14)
at com.example.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
I don't want to provide any AWS credentials; I just want to access the resource anonymously (for now). How do I achieve this? I probably need to make it use something like AnonymousAWSCredentialsProvider, but how do I plug that into Spark or Hadoop?
P.S. Here is my build.sbt, just in case:
scalaVersion := "2.11.7"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.4.1",
  "org.apache.hadoop" % "hadoop-aws" % "2.7.1"
)
UPDATE: After some investigation, I can see why it isn't working.
First of all, S3AFileSystem creates the AWS client with the following chain of credential providers:
AWSCredentialsProviderChain credentials = new AWSCredentialsProviderChain(
    new BasicAWSCredentialsProvider(accessKey, secretKey),
    new InstanceProfileCredentialsProvider(),
    new AnonymousAWSCredentialsProvider()
);
"accessKey" and "secretKey" values are taken from the spark conf instance (keys must be "fs.s3a.access.key" and "fs.s3a.secret.key" or org.apache.hadoop.fs.s3a.Constants.ACCESS_KEY and org.apache.hadoop.fs.s3a.Constants.SECRET_KEY constants, which is more convenient).
Second, you can see that AnonymousAWSCredentialsProvider is the third option (lowest priority). What could possibly be wrong with that? Look at the implementation of AnonymousAWSCredentials:
public class AnonymousAWSCredentials implements AWSCredentials {

    public String getAWSAccessKeyId() {
        return null;
    }

    public String getAWSSecretKey() {
        return null;
    }
}
It simply returns null for both access key and secret key. Sounds reasonable. But look inside AWSCredentialsProviderChain:
AWSCredentials credentials = provider.getCredentials();

if (credentials.getAWSAccessKeyId() != null &&
    credentials.getAWSSecretKey() != null) {
    log.debug("Loading credentials from " + provider.toString());
    lastUsedProvider = provider;
    return credentials;
}
It doesn't choose a provider when both keys are null, which means the anonymous credentials can't work. This looks like a bug in aws-java-sdk 1.7.4. I tried to use the latest version, but it's incompatible with hadoop-aws 2.7.1.
Any other ideas?
I have personally never accessed public data from Spark. You can try to use dummy credentials, or create some just for this purpose. Set them directly on the SparkConf object:
val sparkConf: SparkConf = ???
val accessKeyId: String = ???
val secretAccessKey: String = ???
sparkConf.set("spark.hadoop.fs.s3.awsAccessKeyId", accessKeyId)
sparkConf.set("spark.hadoop.fs.s3n.awsAccessKeyId", accessKeyId)
sparkConf.set("spark.hadoop.fs.s3.awsSecretAccessKey", secretAccessKey)
sparkConf.set("spark.hadoop.fs.s3n.awsSecretAccessKey", secretAccessKey)
As an alternative, read the documentation of DefaultAWSCredentialsProviderChain to see where the credentials are looked for. The list (order is important) is:
Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_KEY
Java System Properties - aws.accessKeyId and aws.secretKey
Credential profiles file at the default location (~/.aws/credentials) shared by all AWS SDKs and the AWS CLI
Instance profile credentials delivered through the Amazon EC2 metadata service
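For example, the Java system properties entry in that list can be set from code before the S3 client is created, using the accessKeyId/secretAccessKey values from above:

// Property names as documented for DefaultAWSCredentialsProviderChain.
System.setProperty("aws.accessKeyId", accessKeyId)
System.setProperty("aws.secretKey", secretAccessKey)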
This is what helped me:
val session = SparkSession.builder()
  .appName("App")
  .master("local[*]")
  .config("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider")
  .getOrCreate()

val df = session.read.csv(filesFromS3: _*)
Versions:
"org.apache.spark" %% "spark-sql" % "2.4.0",
"org.apache.hadoop" % "hadoop-aws" % "2.8.5",
Documentation:
https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#Authentication_properties
It seems you can now use the fs.s3a.aws.credentials.provider config key to get anonymous access via org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider, which correctly special-cases the anonymous provider. However, you need a newer hadoop-aws than 2.7, which means you also need a Spark installation without a bundled Hadoop.
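For a Scala/SparkConf setup like the one in the question, that would look roughly like this (the spark.hadoop. prefix forwards the key into the Hadoop configuration; assumes hadoop-aws 2.8+ on the classpath):

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("app")
  .setMaster("local[*]")
  .set("spark.hadoop.fs.s3a.aws.credentials.provider",
    "org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider")
val sc = new SparkContext(conf)
val lines = sc.textFile(s"s3a://$bucket/$key")   // bucket/key as defined in the question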
Here is how I did it in Colab:
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q http://apache.osuosl.org/spark/spark-2.3.1/spark-2.3.1-bin-without-hadoop.tgz
!tar xf spark-2.3.1-bin-without-hadoop.tgz
!pip install -q findspark
!pip install -q pyarrow
Now we install Hadoop on the side and set SPARK_DIST_CLASSPATH to the output of hadoop classpath, so Spark can see it.
import os
!wget -q http://mirror.nbtelecom.com.br/apache/hadoop/common/hadoop-2.8.4/hadoop-2.8.4.tar.gz
!tar xf hadoop-2.8.4.tar.gz
os.environ['HADOOP_HOME']= '/content/hadoop-2.8.4'
os.environ["SPARK_DIST_CLASSPATH"] = "/content/hadoop-2.8.4/etc/hadoop:/content/hadoop-2.8.4/share/hadoop/common/lib/*:/content/hadoop-2.8.4/share/hadoop/common/*:/content/hadoop-2.8.4/share/hadoop/hdfs:/content/hadoop-2.8.4/share/hadoop/hdfs/lib/*:/content/hadoop-2.8.4/share/hadoop/hdfs/*:/content/hadoop-2.8.4/share/hadoop/yarn/lib/*:/content/hadoop-2.8.4/share/hadoop/yarn/*:/content/hadoop-2.8.4/share/hadoop/mapreduce/lib/*:/content/hadoop-2.8.4/share/hadoop/mapreduce/*:/content/hadoop-2.8.4/contrib/capacity-scheduler/*.jar"
Then we proceed as in https://mikestaszel.com/2018/03/07/apache-spark-on-google-colaboratory/, but add s3a and anonymous-reading support, which is what the question is about.
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.3.1-bin-without-hadoop"
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.amazonaws:aws-java-sdk:1.10.6,org.apache.hadoop:hadoop-aws:2.8.4 --conf spark.sql.execution.arrow.enabled=true --conf spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider pyspark-shell'
And finally we can create the session.
import findspark
findspark.init()
from pyspark.sql import SparkSession
spark = SparkSession.builder.master("local[*]").getOrCreate()
This is very perplexing. If I remove the third example in the code below, the JUnit runner in Eclipse shows the test results in the usual hierarchy. As soon as the third example is added, all 3 tests drop out into the Unrooted Tests category.
import org.junit.runner.RunWith
import org.specs2.mutable.Specification
import org.specs2.specification.AllExpectations
@RunWith(classOf[org.specs2.runner.JUnitRunner])
class ThreeTests extends Specification with AllExpectations {
  "My Repository" should {
    "do x" in {
      1 === 1
    }
    "do y" in {
      1 === 1
    }
    "do z" in {
      1 === 1
    }
  }
}
ScalaIDE: 3.0.1-vfinal-20130718-1727-Typesafe
Eclipse SDK Version: 3.7.2
Specs2: "org.scalatra" %% "scalatra-specs2" % 2.2.2
You should use the latest specs2 version: "org.specs2" %% "specs2" % "2.3.12"
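In build.sbt terms that would be (the "test" scope here is my assumption, matching the usual setup):

libraryDependencies += "org.specs2" %% "specs2" % "2.3.12" % "test"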
I’m working my way through the Lift Application Development Cookbook by Gilberto T. Garcia Jr and have run up against a problem I can’t seem to resolve. I’ve copied the source code Chap06-map-table and I’m trying to modify it to work with my IBM i (iSeries, AS/400, i5) database. I was able to make it work with the first type of connection using Squeryl Record. However, I can’t seem to figure how to get this to work using a JNDI Datasource. I’ve spent a couple of days searching the internet for examples of setting this up and have not found a good example involving a DB/400 database connection. Below is the error I get when I attempt to start the container and the code I’ve modified in an effort to make it work. Any help would be appreciated.
There seem to be several choices for the data source class in jt400.jar (JTOpen), and I'm not sure which would be best to use, or perhaps there's another. I've been trying this with each of the three and am assuming the first is the correct one.
com.ibm.as400.access.AS400JDBCManagedConnectionPoolDataSource
com.ibm.as400.access.AS400JDBCConnectionPoolDataSource
com.ibm.as400.access.AS400JDBCDataSource
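For reference, a rough Scala sketch of what the Jetty <New>/<Set> configuration shown further down does with the first of these classes; the bean setter names are inferred from that jetty-env.xml, and the bracketed values are placeholders:

import com.ibm.as400.access.AS400JDBCManagedConnectionPoolDataSource

val ds = new AS400JDBCManagedConnectionPoolDataSource()
ds.setServerName("www.[server].com")   // host name only, not a full JDBC URL
ds.setUser("[user]")
ds.setPassword("[password]")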
Thanks. Bob
This is the start of the error:
> container:start
[info] jetty-8.0.4.v20111024
[info] No Transaction manager found - if your webapp requires one, please configure one.
[info] NO JSP Support for /, did not find org.apache.jasper.servlet.JspServlet
[info] started o.e.j.w.WebAppContext{/,[file:/C:/Users/Bob/Lift26Projects/scala_210/chap06-map-table/src/main/webapp/]}
[info] started o.e.j.w.WebAppContext{/,[file:/C:/Users/Bob/Lift26Projects/scala_210/chap06-map-table/src/main/webapp/]}
18:21:47.062 [pool-7-thread-1] ERROR n.liftweb.http.provider.HTTPProvider - Failed to Boot! Your application may not run properly
java.sql.SQLException: The application requester cannot establish the connection. ("jdbc:as400://www.busapp.com;libraries=PLAY2TEST";naming=system;errors=full;)
    at com.ibm.as400.access.JDError.throwSQLException(JDError.java:524) ~[jt400-6.7.jar:JTOpen 6.7]
    at com.ibm.as400.access.AS400JDBCConnection.setProperties(AS400JDBCConnection.java:3142) ~[jt400-6.7.jar:JTOpen 6.7]
    at com.ibm.as400.access.AS400JDBCManagedDataSource.createPhysicalConnect...
My build.sbt file:
name := "Lift 2.5 starter template"
version := "0.0.1"
organization := "net.liftweb"
scalaVersion := "2.10.0"
resolvers ++= Seq(
  "snapshots" at "http://oss.sonatype.org/content/repositories/snapshots",
  "staging" at "http://oss.sonatype.org/content/repositories/staging",
  "releases" at "http://oss.sonatype.org/content/repositories/releases"
)
seq(com.github.siasia.WebPlugin.webSettings :_*)
unmanagedResourceDirectories in Test <+= (baseDirectory) { _ / "src/main/webapp" }
scalacOptions ++= Seq("-deprecation", "-unchecked")
env in Compile := Some(file("./src/main/webapp/WEB-INF/jetty-env.xml") asFile)
libraryDependencies ++= {
  val liftVersion = "2.5"
  Seq(
    "net.liftweb" %% "lift-webkit" % liftVersion % "compile",
    "net.liftmodules" %% "lift-jquery-module_2.5" % "2.3",
    "org.eclipse.jetty" % "jetty-webapp" % "8.0.4.v20111024" % "container",
    "org.eclipse.jetty" % "jetty-plus" % "8.0.4.v20111024" % "container",
    "ch.qos.logback" % "logback-classic" % "1.0.6",
    "org.specs2" %% "specs2" % "1.14" % "test",
    "net.liftweb" %% "lift-squeryl-record" % liftVersion % "compile",
    "net.sf.jt400" % "jt400" % "6.7",
    "org.liquibase" % "liquibase-maven-plugin" % "3.0.2"
  )
}
This is my boot.scala file:
package bootstrap.liftweb
import _root_.liquibase.database.DatabaseFactory
import _root_.liquibase.database.jvm.JdbcConnection
import _root_.liquibase.exception.DatabaseException
import _root_.liquibase.Liquibase
import _root_.liquibase.resource.FileSystemResourceAccessor
import net.liftweb._
import util._
import Helpers._
import common._
import http._
import sitemap._
import Loc._
import net.liftmodules.JQueryModule
import net.liftweb.http.js.jquery._
import net.liftweb.squerylrecord.SquerylRecord
import org.squeryl.Session
import java.sql.{SQLException, DriverManager}
import org.squeryl.adapters.DB2Adapter
import javax.naming.InitialContext
import javax.sql.DataSource
import code.model.LiftBookSchema
/**
* A class that's instantiated early and run. It allows the application
* to modify lift's environment
*/
class Boot {

  def runChangeLog(ds: DataSource) {
    val connection = ds.getConnection
    try {
      val database = DatabaseFactory.getInstance().
        findCorrectDatabaseImplementation(new JdbcConnection(connection))
      val liquibase = new Liquibase(
        "database/changelog/db.changelog-master.xml",
        new FileSystemResourceAccessor(),
        database
      )
      liquibase.update(null)
    } catch {
      case e: SQLException => {
        connection.rollback()
        throw new DatabaseException(e)
      }
    }
  }

  def boot {
    // where to search snippet
    LiftRules.addToPackages("code")

    prepareDb()

    // Build SiteMap
    val entries = List(
      Menu.i("Home") / "index", // the simple way to declare a menu

      // more complex because this menu allows anything in the
      // /static path to be visible
      Menu(Loc("Static", Link(List("static"), true, "/static/index"),
        "Static Content")))

    // set the sitemap. Note if you don't want access control for
    // each page, just comment this line out.
    LiftRules.setSiteMap(SiteMap(entries: _*))

    // Show the spinny image when an Ajax call starts
    LiftRules.ajaxStart =
      Full(() => LiftRules.jsArtifacts.show("ajax-loader").cmd)

    // Make the spinny image go away when it ends
    LiftRules.ajaxEnd =
      Full(() => LiftRules.jsArtifacts.hide("ajax-loader").cmd)

    // Force the request to be UTF-8
    LiftRules.early.append(_.setCharacterEncoding("UTF-8"))

    // Use HTML5 for rendering
    LiftRules.htmlProperties.default.set((r: Req) =>
      new Html5Properties(r.userAgent))

    // Init the jQuery module, see http://liftweb.net/jquery for more information.
    LiftRules.jsArtifacts = JQueryArtifacts
    JQueryModule.InitParam.JQuery = JQueryModule.JQuery172
    JQueryModule.init()
  }

  def prepareDb() {
    Class.forName("com.ibm.as400.access.AS400JDBCManagedConnectionPoolDataSource")
    val ds = new InitialContext().lookup("java:/comp/env/jdbc/dsliftbook").asInstanceOf[DataSource]
    runChangeLog(ds)
    SquerylRecord.initWithSquerylSession(
      Session.create(
        ds.getConnection,
        new DB2Adapter)
    )
  }
}
This is my jetty-env.xml file:
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <New id="dsliftbook" class="org.eclipse.jetty.plus.jndi.Resource">
    <Arg></Arg>
    <Arg>jdbc/dsliftbook</Arg>
    <Arg>
      <New class="com.ibm.as400.access.AS400JDBCManagedConnectionPoolDataSource">
        <Set name="serverName">"jdbc:as400://www.[server].com;libraries=PLAY2TEST";naming=system;errors=full;</Set>
        <Set name="user">[user]</Set>
        <Set name="password">[password]</Set>
      </New>
    </Arg>
  </New>
</Configure>
Okay, I've managed to get connected. One problem was the quotation marks in the jetty-env.xml file. Also, the user name/password I was using apparently did not have the authority required to make this work; I'm not sure why, since this is the same ID/password I use for all my iSeries development. So for now, I'm using another user profile with security officer authority until I can figure out what's happening or what authorities are required.
Once I got signed on, I was not able to set a library list for the user, and this was causing the SQL to fail: it was looking for a library with the same name as the user ID. For the time being, I've gotten around this issue by creating a new library named the same as the user ID.
One other problem here is that even though I'm supplying both the ID and password, I'm getting prompted to enter the ID/password before it will connect. The ID and URL are filled in, but the password always has to be re-keyed.
I've included the current source for the jetty-env.xml file and the boot.scala file. Hopefully this may help others.
Thanks to Dave and James for their help!
Bob
boot.scala:
package bootstrap.liftweb
// import _root_.liquibase.database.DatabaseFactory
// import _root_.liquibase.database.jvm.JdbcConnection
// import _root_.liquibase.exception.DatabaseException
// import _root_.liquibase.Liquibase
// import _root_.liquibase.resource.FileSystemResourceAccessor
import net.liftweb._
import util._
import Helpers._
import common._
import http._
import sitemap._
import Loc._
import net.liftmodules.JQueryModule
import net.liftweb.http.js.jquery._
import net.liftweb.squerylrecord.SquerylRecord
import org.squeryl.Session
import java.sql.{SQLException, DriverManager}
import org.squeryl.adapters.DB2Adapter
import javax.naming.InitialContext
import javax.sql.DataSource
import code.model.LiftBookSchema
import com.ibm.as400.access.AS400JDBCManagedConnectionPoolDataSource
/**
* A class that's instantiated early and run. It allows the application
* to modify lift's environment
*/
class Boot {

  // def runChangeLog(ds: DataSource) {
  //   val connection = ds.getConnection
  //   try {
  //     val database = DatabaseFactory.getInstance().
  //       findCorrectDatabaseImplementation(new JdbcConnection(connection))
  //     val liquibase = new Liquibase(
  //       "database/changelog/db.changelog-master.xml",
  //       new FileSystemResourceAccessor(),
  //       database
  //     )
  //     liquibase.update(null)
  //   } catch {
  //     case e: SQLException => {
  //       connection.rollback()
  //       throw new DatabaseException(e)
  //     }
  //   }
  // }

  def boot {
    // where to search snippet
    LiftRules.addToPackages("code")

    prepareDb()

    // Build SiteMap
    val entries = List(
      Menu.i("Home") / "index", // the simple way to declare a menu

      // more complex because this menu allows anything in the
      // /static path to be visible
      Menu(Loc("Static", Link(List("static"), true, "/static/index"),
        "Static Content")))

    // set the sitemap. Note if you don't want access control for
    // each page, just comment this line out.
    LiftRules.setSiteMap(SiteMap(entries: _*))

    // Show the spinny image when an Ajax call starts
    LiftRules.ajaxStart =
      Full(() => LiftRules.jsArtifacts.show("ajax-loader").cmd)

    // Make the spinny image go away when it ends
    LiftRules.ajaxEnd =
      Full(() => LiftRules.jsArtifacts.hide("ajax-loader").cmd)

    // Force the request to be UTF-8
    LiftRules.early.append(_.setCharacterEncoding("UTF-8"))

    // Use HTML5 for rendering
    LiftRules.htmlProperties.default.set((r: Req) =>
      new Html5Properties(r.userAgent))

    // Init the jQuery module, see http://liftweb.net/jquery for more information.
    LiftRules.jsArtifacts = JQueryArtifacts
    JQueryModule.InitParam.JQuery = JQueryModule.JQuery172
    JQueryModule.init()
  }

  def prepareDb() {
    Class.forName("com.ibm.as400.access.AS400JDBCManagedConnectionPoolDataSource")
    val ds = new InitialContext().lookup("java:/comp/env/jdbc/dsliftbook").asInstanceOf[DataSource]
    // runChangeLog(ds)
    SquerylRecord.initWithSquerylSession(Session.create(ds.getConnection, new DB2Adapter))
  }
}
jetty-env.xml:
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <New id="dsliftbook" class="org.eclipse.jetty.plus.jndi.Resource">
    <Arg></Arg>
    <Arg>jdbc/dsliftbook</Arg>
    <Arg>
      <New class="com.ibm.as400.access.AS400JDBCManagedConnectionPoolDataSource">
        <Set name="serverName">www.[server].com</Set>
        <Set name="user">DBUSER</Set>
        <Set name="password">DBUSER</Set>
      </New>
    </Arg>
  </New>
</Configure>