Slick 3 requires an "import api" from a specific database driver, e.g.
import slick.driver.H2Driver.api._
...DAO implementation...
or
import slick.driver.PostgresDriver.api._
...DAO implementation...
How do I use PostgreSQL in production and H2 in unit tests?
Use DatabaseConfig instead. As the Slick documentation states:
On top of the configuration syntax for Database, there is another
layer in the form of DatabaseConfig which allows you to configure a
Slick driver plus a matching Database together. This makes it easy to
abstract over different kinds of database systems by simply changing a
configuration file.
Instead of importing database specific drivers, first obtain a DatabaseConfig:
val dbConfig = DatabaseConfig.forConfig[JdbcProfile]("<db_name>")
And then import api from it:
import dbConfig.driver.api._
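The concrete driver then lives in configuration, so production and tests can bind the same name to different databases. A minimal sketch of the two config files (the "mydb" name and the URLs are illustrative):

```hocon
# src/main/resources/application.conf -- production uses PostgreSQL
mydb {
  driver = "slick.driver.PostgresDriver$"
  db {
    driver = org.postgresql.Driver
    url = "jdbc:postgresql://localhost/mydb"
  }
}

# src/test/resources/application.conf -- unit tests use in-memory H2
mydb {
  driver = "slick.driver.H2Driver$"
  db {
    driver = org.h2.Driver
    url = "jdbc:h2:mem:test;DB_CLOSE_DELAY=-1"
  }
}
```

DatabaseConfig.forConfig[JdbcProfile]("mydb") then picks up whichever file is on the classpath, and the DAO code written against dbConfig.driver.api._ stays unchanged.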
Related
I have an SDU model created, and I need to import this model into another collection. The documentation explains that it is necessary to add one document and then import the model.
https://cloud.ibm.com/docs/services/discovery?topic=discovery-sdu#import
The problem is that my collection is connected to the Object Storage service with more than 1000 documents, so it is not possible to add only one document and then import the model.
I imported the model, but SDU doesn't recognize it. Is it possible to import a model with this type of connection?
Thanks.
After doing the initial crawl of Object Storage you should get the option to import the model. If you are getting an error when importing, please post it here for further debugging. (I am an IBM Watson employee)
This is my code:
Application.conf
slick.dbs.default.driver="com.typesafe.slick.driver.oracle.OracleDriver$"
slick.dbs.default.db.driver=oracle.jdbc.driver.OracleDriver
slick.dbs.default.db.url="jdbc:oracle:thin:@XXXXXXX"
slick.dbs.default.db.user=param
slick.dbs.default.db.password="xxxx"
slick.dbs.default.driver="com.typesafe.slick.driver.oracle.OracleDriver$"
slick.dbs.default.db.driver=oracle.jdbc.driver.OracleDriver
slick.dbs.default.db.url="jdbc:oracle:thin:@XXXXXXX"
slick.dbs.default.db.user=param2
slick.dbs.default.db.password="xxxx"
How do I connect to multiple schemas with Scala, Play, Slick and Oracle?
With slick.dbs.default.*, you configure your default database connection.
If you want to have multiple database connections, you can declare named databases.
Try to use something like this in your configuration:
slick.dbs.oracle2.driver="com.typesafe.slick.driver.oracle.OracleDriver$"
slick.dbs.oracle2.db.driver=oracle.jdbc.driver.OracleDriver
slick.dbs.oracle2.db.url="jdbc:oracle:thin:@XXXXXXX"
slick.dbs.oracle2.db.user=param2
slick.dbs.oracle2.db.password="xxxx"
By default, the default database connection is used. If you'd like to use your other databases, in this case oracle2, you can inject them using the NamedDatabase annotation.
@NamedDatabase("oracle2") override protected val dbConfigProvider: DatabaseConfigProvider
I have just started using Play 2.0 with Scala and Casbah for connecting to MongoDB. I can connect to my MongoDB instance, but what I am looking for is a way to access the MongoClient from all my model classes.
Is there a dependency-injection way to inject the MongoClient into all Scala models? Or
should I have one Scala object that initialises the MongoClient, and use that object to refer to the MongoClient in all my models? Or
is there a better way to do this?
As MongoClient uses a connection pool internally, it's optimal to have only a single instance for your application, and that single object can then be used by all your models.
Also, you could look at Salat, which might do what you require or give you an idea of how best to implement your own models.
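A minimal sketch of the single-object approach. PooledClient is a stand-in for Casbah's MongoClient (not pulled in here); in real code the lazy val would hold MongoClient():

```scala
// Stand-in for Casbah's MongoClient: construction is expensive because
// it opens a connection pool, so we want exactly one per application.
final class PooledClient

// Application-wide holder: the client is created lazily, on first use,
// and every model refers to the same instance.
object Mongo {
  lazy val client: PooledClient = new PooledClient
}

// Example models sharing the one client (names are illustrative).
object UserModel  { def db: PooledClient = Mongo.client }
object OrderModel { def db: PooledClient = Mongo.client }
```

Because the val is lazy, no connection is opened until a model first touches the database, and every model sees the same pooled client.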
I am pretty new to Scala/ScalaTest and I am trying to write a few test cases that mock a DB.
I have a function called FindInDB(entry: String) that checks whether "entry" is in the DB, like so:
entry match {
  case `entry` =>
    if (db.table contains entry)
      true
    else
      false
}
FindInDB is called in another function, which is defined in a class called Service.
I want to be able to mock the db.table part. From reading the ScalaTest docs I know I could mock the class in which FindInDB is defined and control what the function that calls FindInDB returns, but I want to test the FindInDB function itself and control what is in db.table through a mock.
You can use a DB mocking framework such as jOOQ, or my framework Acolyte. Acolyte can mock the DB at the JDBC level for any project based on JDBC directly or indirectly (e.g. JPA, EJB, Anorm, Slick): for each test case you describe which JDBC result (result set, update count, error) is returned for which statement.
It lets you mock exactly the JDBC data that would be exchanged between your app/lib and the expected DB, with several advantages for testing: unit isolation and simplicity (no need to set up/tear down a test DB with fixtures).
Documentation is online at http://acolyte.eu.org/ .
There is a Scala DSL which is easy to use for testing (examples with specs are available in the documentation).
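If a full JDBC mocking framework is more than you need, the db.table dependency can also be abstracted behind a trait and replaced with a hand-rolled stub. A sketch under assumed names (Db, Service and the Set-based table are illustrative, not from the original post):

```scala
// Abstract the database access behind a trait so tests can substitute it.
trait Db {
  def table: Set[String]
}

// The service receives its Db via the constructor (plain DI, no framework).
class Service(db: Db) {
  def findInDB(entry: String): Boolean = db.table contains entry
}

// In a test, a stub fixes the table contents without any real database.
object FakeDb extends Db {
  val table: Set[String] = Set("known-entry")
}
```

new Service(FakeDb).findInDB("known-entry") then returns true and any other argument returns false, with no setup or teardown and full control over what "is in the DB" per test case.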
All,
I'm importing a group of classes for sqlalchemy from a separate file. They define the tables on my DB (inheriting from declarative_base()), and were originally located in the same file as my engine and metadata creation.
Since I have quite a few tables, each of them complex, I don't want them located in the same file I'm using them in. It makes working in the file unwieldy, and I want a clearer delineation, since the classes document the current schema.
I refactored them into their own file, and suddenly the metadata does not find them automatically. Following this link, I found that it was because my main file declares its own Base:
from tables import address, statements
Base = declarative_base()
metadata = MetaData()
Base.metadata.create_all()
And so does my tables file:
Base = declarative_base()
class address(Base):
...
So, as far as I can tell, they get separate "Bases", which is why the metadata can't find and create the declared tables. I've done some googling and it looks like this should be possible, but there isn't any obvious way to go about it.
How do I import tables defined in a separate file?
Update:
I tried the following, and it sort of works.
In the tables file, declare a Base for the table classes to import:
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
Then in the main file, import the preexisting base and give its metadata to a new Base:
from sqlalchemy.ext.declarative import declarative_base
from tables import Base as tableBase
Base = declarative_base(metadata=tableBase.metadata)
After some more testing, I've found that this approach leaves out important information. I've gone back to one file with everything in it, since that works correctly. I'll leave the question open in case someone can come up with an answer that does work correctly, or alternatively points to the appropriate docs.
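For what it's worth, the usual fix is to call declarative_base() exactly once, next to the models, and have every other module import that same Base rather than create its own. A condensed sketch, with both "files" shown in one script (Address and the file names are illustrative):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base

# --- tables.py: the one and only Base lives next to the models ---
Base = declarative_base()

class Address(Base):
    __tablename__ = "address"
    id = Column(Integer, primary_key=True)
    street = Column(String)

# --- main.py: import the existing Base instead of creating a new one ---
# from tables import Base, Address
engine = create_engine("sqlite://")   # in-memory DB for illustration
Base.metadata.create_all(engine)      # finds Address: same MetaData object
```

Because the main file never calls declarative_base() itself, there is only one MetaData, so create_all() sees every declared table. (declarative_base is importable from sqlalchemy.orm as of SQLAlchemy 1.4; on older versions it lives in sqlalchemy.ext.declarative.)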