Scala Play Slick Connection multiple Schema - scala

this is my code
Application.conf
slick.dbs.default.driver="com.typesafe.slick.driver.oracle.OracleDriver$"
slick.dbs.default.db.driver=oracle.jdbc.driver.OracleDriver
slick.dbs.default.db.url="jdbc:oracle:thin:@XXXXXXX"
slick.dbs.default.db.user=param
slick.dbs.default.db.password="xxxx"
slick.dbs.default.driver="com.typesafe.slick.driver.oracle.OracleDriver$"
slick.dbs.default.db.driver=oracle.jdbc.driver.OracleDriver
slick.dbs.default.db.url="jdbc:oracle:thin:@XXXXXXX"
slick.dbs.default.db.user=param2
slick.dbs.default.db.password="xxxx"
How do I connect to multiple schemas with Scala Play Slick and Oracle?

With slick.dbs.default.*, you configure your default database connection.
If you want multiple database connections, you can declare named databases.
Try something like this in your configuration:
oracle2.driver="com.typesafe.slick.driver.oracle.OracleDriver$"
oracle2.db.driver=oracle.jdbc.driver.OracleDriver
oracle2.db.url="jdbc:oracle:thin:@XXXXXXX"
oracle2.db.user=param2
oracle2.db.password="xxxx"
By default, the default database connection is used. If you'd like to use one of your other databases, in this case oracle2, you can inject it using the @NamedDatabase annotation:
@NamedDatabase("oracle2") override protected val dbConfigProvider: DatabaseConfigProvider
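A full injection site could look like the sketch below. The class name Oracle2Repository is illustrative, not from the question, and the imports assume the play-slick module is on the classpath:

```scala
// Sketch: a repository wired to the named "oracle2" database.
import javax.inject.Inject
import play.api.db.slick.{DatabaseConfigProvider, HasDatabaseConfigProvider}
import play.db.NamedDatabase
import slick.driver.JdbcProfile

class Oracle2Repository @Inject()(
    @NamedDatabase("oracle2") protected val dbConfigProvider: DatabaseConfigProvider
) extends HasDatabaseConfigProvider[JdbcProfile] {
  import driver.api._
  // run queries against the oracle2 connection here, e.g. db.run(...)
}
```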

Related

Import different db drivers in Slick

Slick 3 has an "api" import for each specific database driver, e.g.
import slick.driver.H2Driver.api._
...DAO implementation...
or
import slick.driver.PostgresDriver.api._
...DAO implementation...
How do I use postgresql in production and h2 in unit test?
Use DatabaseConfig instead. As the Slick documentation states:
On top of the configuration syntax for Database, there is another
layer in the form of DatabaseConfig which allows you to configure a
Slick driver plus a matching Database together. This makes it easy to
abstract over different kinds of database systems by simply changing a
configuration file.
Instead of importing database specific drivers, first obtain a DatabaseConfig:
val dbConfig = DatabaseConfig.forConfig[JdbcProfile]("<db_name>")
And then import api from it:
import dbConfig.driver.api._
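Put together, a DAO written against DatabaseConfig stays database-agnostic. A minimal sketch, where the table and the "postgres"/"h2" config keys are illustrative, and slick.backend.DatabaseConfig matches the Slick 3.0/3.1 package layout used above:

```scala
import slick.backend.DatabaseConfig
import slick.driver.JdbcProfile

class UserDAO(dbConfig: DatabaseConfig[JdbcProfile]) {
  import dbConfig.driver.api._

  class Users(tag: Tag) extends Table[(Long, String)](tag, "users") {
    def id   = column[Long]("id", O.PrimaryKey)
    def name = column[String]("name")
    def *    = (id, name)
  }
  val users = TableQuery[Users]

  def findAll = dbConfig.db.run(users.result)
}

// production: new UserDAO(DatabaseConfig.forConfig[JdbcProfile]("postgres"))
// unit tests: new UserDAO(DatabaseConfig.forConfig[JdbcProfile]("h2"))
```

Only the configuration file decides which database backs the DAO; no database-specific import appears in the code.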

Creating rdbms DDL from scala classes

Is there a straightforward way to generate rdbms ddl, for a set of scala classes?
I.e. to derive a table ddl for each class (whereby each case class field would translate to field of the table, with a corresponding rdbms type).
Or, to directly create the database objects in the rdbms.
I have found some documentation about Ebean being embedded in Play framework, but was not sure what side-effects may enabling Ebean in play have, and how much taming would Ebean require to avoid any of them. I have never even used Ebean before...
I would actually rather use something outside of Play, but if it's simple to accomplish in Play I would dearly like to know a clean way. Thanks in advance!
Is there a straightforward way to generate rdbms ddl, for a set of
scala classes?
Yes
Ebean
Ebean is the default ORM provided by Play: you just have to create an entity and enable evolutions (enabled by default). It will create a .sql file in the conf/evolutions/default directory, and when you hit localhost:9000 it will show you an apply-script page. But your tag says you are using Scala, so you can't really use Ebean with Scala. If you do, you will have to sacrifice the immutability of your Scala classes and use the Java collections API instead of the Scala one.
Using Scala this way will just bring more trouble than using Java directly.
Source
JPA
JPA (using Hibernate as the implementation) is the default way to access and manage an SQL database in a standard Play Java application. It is still possible to use JPA from a Play Scala application, but it is probably not the best way, and it should be considered legacy and deprecated. Source
Anorm (if you want to write the DDL yourself)
Anorm is not an Object Relational Mapper, so you have to write the DDL manually. Source
Slick
Functional relational mapping for Scala. Source
Activate
Activate is a framework to persist objects in Scala. Source
Skinny
It is built upon the ScalikeJDBC library, which is a thin but powerful JDBC wrapper. Details1, Details2
Also check RDBMS with scala, Best data access option for play scala
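For completeness on the Slick option: Slick can derive the DDL from a table definition, so you can either print the CREATE statements or create the objects directly in the RDBMS. A minimal sketch; the Coffee table is illustrative:

```scala
import slick.driver.H2Driver.api._

case class Coffee(name: String, price: Double)

class Coffees(tag: Tag) extends Table[Coffee](tag, "COFFEES") {
  def name  = column[String]("NAME", O.PrimaryKey)
  def price = column[Double]("PRICE")
  def *     = (name, price) <> (Coffee.tupled, Coffee.unapply)
}

object DdlDemo extends App {
  val coffees = TableQuery[Coffees]
  // Print the DDL without touching a database...
  coffees.schema.createStatements.foreach(println)
  // ...or create the objects directly in the RDBMS:
  // Database.forConfig("mydb").run(coffees.schema.create)
}
```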

Scalatest mocking a db

I am pretty new to using Scala/Scalatest and I am trying to write a few test cases that mock a db.
I have a function called FindInDB(entry : String) that checks if "entry" is in the db, like so:
entry match {
  case `entry` =>
    if (db.table contains entry) {
      true
    }
    false
}
FindInDB is called in another function, which is defined in a class called Service.
I want to be able to mock the db.table part. From reading the ScalaTest docs I know I could mock the class that FindInDB is defined in and control what the function that calls FindInDB returns, but I want to test the FindInDB function itself and control what is in db.table through a mock.
You can use a DB mock-up framework such as jOOQ, or my framework Acolyte. Acolyte can mock the DB at the JDBC level, for any project based on JDBC directly or indirectly (e.g. JPA, EJB, Anorm, Slick): you describe, for each test case, which JDBC result (resultset, update count, error) goes with which statement.
It allows you to mock exactly the same JDBC data that would be exchanged by your app/lib with the expected DB, with many advantages for testing: unit isolation, simplicity (no need to set up/tear down a test DB with fixtures).
Documentation is online at http://acolyte.eu.org/ .
There is a Scala DSL which is easily usable for testing (examples with specs are available in documentation).
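Independently of any mocking framework, you can also make db.table stubbable by putting it behind a trait and passing it in; the Db and Service names below are assumptions, not from the question:

```scala
// Sketch: abstract the table lookup behind a trait so a test can
// substitute an in-memory implementation for db.table.
trait Db {
  def table: Set[String]
}

class Service(db: Db) {
  def findInDB(entry: String): Boolean = db.table contains entry
}

// In a test, provide a stub holding exactly the rows you want:
val stubDb  = new Db { val table = Set("known-entry") }
val service = new Service(stubDb)
// service.findInDB("known-entry") is true; unknown entries return false
```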

playframework2 how to open multi-datasource configuration with jpa

I want to configure multiple datasources in Play Framework 2.1 with JPA.
One is H2, and the other is Oracle.
So I added code like this in application.conf:
db.default.driver=org.h2.Driver
db.default.url="jdbc:h2:file:E:/myproject/setup/db/monitor"
db.default.user=sa
db.default.password=sa
db.default.jndiName=DefaultDS
jpa.default=defaultPersistenceUnit
db.oracle.driver=oracle.jdbc.driver.OracleDriver
db.oracle.url="jdbc:oracle:thin:@10.1.20.10:1521:prjct"
db.oracle.user=LOG_ANALYSE
db.oracle.password=LOG_ANALYSE
db.oracle.jndiName=OracleDS
jpa.oracle=ojdbcPersistenceUnit
I don't know what to assign to jpa.oracle, so I gave it an arbitrary name, but it does not show any errors. Should I change it, and how?
The main problem is: how can I tell Play which entities are managed by the default datasource and which by the other one, Oracle?
For example, class A and B's tables are in H2 and class C and D's tables are in Oracle. How should I code these entities to assign the datasources?
Finally, I found the way to connect to different DB sources.
In Play, the JPA API has no method named getJPAConfig("").
There is another form of em(): em("").
So I access the DBs as:
EntityManager em0 = JPA.em("default");
EntityManager em1 = JPA.em("oracle");
that's it!
I have not used this feature (yet), but you have to use one of these annotations on your models:
@PersistenceUnit(name="default")
@PersistenceUnit(name="oracle")
Or when you query yourself you can also specify it as:
EntityManager em = JPA.getJPAConfig("oracle").em();
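Both answers assume that each name maps to a persistence unit declared in conf/META-INF/persistence.xml. A sketch of such a file, using the entity classes from the question (the models package name is illustrative):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <!-- entities A and B live in H2, reached through the DefaultDS JNDI name -->
  <persistence-unit name="defaultPersistenceUnit" transaction-type="RESOURCE_LOCAL">
    <non-jta-data-source>DefaultDS</non-jta-data-source>
    <class>models.A</class>
    <class>models.B</class>
  </persistence-unit>
  <!-- entities C and D live in Oracle, reached through OracleDS -->
  <persistence-unit name="ojdbcPersistenceUnit" transaction-type="RESOURCE_LOCAL">
    <non-jta-data-source>OracleDS</non-jta-data-source>
    <class>models.C</class>
    <class>models.D</class>
  </persistence-unit>
</persistence>
```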

Setting default schema for Vertica Database

I am building a web application using Play! with Vertica database as back-end. The JDBC connection string for Vertica contains the server and database name, but my tables are under a specific schema (say "dev_myschema"). Thus, I should refer to my table as "dev_myschema.mytable". There is an exact copy of all these tables in a production schema as well (say "prod_myschema") with real data.
I would like to set this schema name in the configuration file so that it is easy to switch between the two schemas. For now, I have a getConnection method in a helper class that calls DB.getConnection() and sets the configured schema as the default schema for that connection object. However, this does not help in the model classes, where the schema is mentioned along with the Entity annotation (@Entity @Table(name="dev_myschema.mytable")).
Is there a way by which I can specify the schema name in the configuration file and have it read by the connection method as well as the model annotations?
Thanks.
Eugene got it almost correct, but was missing an underscore. The correct Vertica SQL syntax to set the default schema is:
set search_path to dev_myschema
As Eugene suggested, if you are using low-level JDBC, as soon as you create your Connection object you can do:
conn.createStatement().executeUpdate("set search_path to " + schemaName);
As far as I'm aware (and I just scanned the 4.1.7 documentation), there is no way as of yet to set a schema as the default.
According to the SQL guide, the default schema is the first one found in your search path. Maybe you could exploit that and make sure your copy is found first.
The way I handle this issue is by executing a "set search path" command if I am using my development schema. So, as soon as your Vertica connection object is created, execute the following command:
"set search path to dev_myschema"
In my application code, I just have my Vertica object check an environment/config variable, and if the "dev schema" setting is present, it executes that statement upon establishing the connection. My production config doesn't have that setting, so it will just use the default schema in that case and not incur the additional overhead of executing that statement every time.
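A sketch of that approach as a Play helper; the vertica.schema config key and the Schema object name are assumptions, not from the answer:

```scala
import java.sql.Connection
import play.api.Play.current
import play.api.db.DB

object Schema {
  def getConnection(): Connection = {
    val conn = DB.getConnection()
    // Only pay for the extra statement when a dev schema is configured;
    // production config omits the key and keeps the default schema.
    current.configuration.getString("vertica.schema").foreach { schema =>
      conn.createStatement().executeUpdate("set search_path to " + schema)
    }
    conn
  }
}
```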
In Vertica 7.0, an admin can set it at the user level by issuing the command below:
alter user user_name search_path schema1, schema2;