I am trying to connect my Scala application to a Postgres cluster consisting of one master node and three slaves/read replicas. My application.conf currently looks like this:
slick {
  dbs {
    default {
      driver = "com.company.division.db.ExtendedPgDriver$"
      db {
        driver = "org.postgresql.Driver"
        url = "jdbc:postgresql://"${?DB_ADDR}":"${?DB_PORT}"/"${?DB_NAME}
        user = ${?DB_USERNAME}
        password = ${?DB_PASSWORD}
      }
    }
  }
}
According to the PostgreSQL JDBC driver's documentation, I can define the master and slaves all in one JDBC URL, which gives me some failover capability, like this:
jdbc:postgresql://host1:port1,host2:port2/database
However, if I want to separate my connections by read and write capabilities, I have to define two JDBC URLs, like this:
jdbc:postgresql://node1,node2,node3/database?targetServerType=master
jdbc:postgresql://node1,node2,node3/database?targetServerType=preferSlave&loadBalanceHosts=true
How can I define two JDBC URLs within Slick? Should I define two separate entities under slick.dbs, or can my slick.dbs.default.db entity have multiple URLs defined?
Found an answer in Daniel Westheide's blog post. To summarize, it can be done with a DB wrapper class and custom Effect types that provide specific rules controlling where read-only queries are directed vs. where write queries are directed.
Then your slick file would look like this:
slick {
  dbs {
    default {
      driver = "com.yourdomain.db.ExtendedPgDriver$"
      db {
        driver = "org.postgresql.Driver"
        url = "jdbc:postgresql://"${?DB_PORT_5432_TCP_ADDR}":"${?DB_PORT_5432_TCP_PORT}"/"${?DB_NAME}
        user = ${?DB_USERNAME}
        password = ${?DB_PASSWORD}
      }
    }
    readonly {
      driver = "com.yourdomain.db.ExtendedPgDriver$"
      db {
        driver = "org.postgresql.Driver"
        url = ${DB_READ_REPLICA_URL}
        user = ${?DB_USERNAME}
        password = ${?DB_PASSWORD}
      }
    }
  }
}
And it's up to your DB wrapper class to route queries to either 'default' or 'readonly'.
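For readers who don't want to dig through the blog post, here is a minimal sketch of what such a wrapper could look like. It assumes plain Slick 3.x (with Play-Slick you would obtain the two databases via DatabaseConfigProvider instead), and the class and method names are made up for illustration; the compile-time routing via the action's Effect type is the idea from Westheide's post:

import scala.concurrent.Future
import slick.dbio.{DBIOAction, Effect, NoStream}
import slick.jdbc.JdbcBackend.Database

// Hypothetical wrapper around the two databases defined above.
// Routing is checked at compile time through the action's Effect type.
class ReadWriteDB(master: Database, replicas: Database) {

  // Accepts only actions whose effect is Effect.Read (e.g. someQuery.result);
  // these are safe to serve from the read replicas.
  def runRead[R](action: DBIOAction[R, NoStream, Effect.Read]): Future[R] =
    replicas.run(action)

  // Accepts any action (writes, transactions, mixed read/write);
  // anything that might write has to go to the master.
  def runWrite[R](action: DBIOAction[R, NoStream, Nothing]): Future[R] =
    master.run(action)
}

object ReadWriteDB {
  // The config paths match the application.conf shown above
  def fromConfig(): ReadWriteDB =
    new ReadWriteDB(
      master   = Database.forConfig("slick.dbs.default.db"),
      replicas = Database.forConfig("slick.dbs.readonly.db")
    )
}

With this shape, passing a write action to runRead fails to compile, so misrouted queries are caught before they ever reach a replica.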
Related
One of my schema.prisma files is written like this:
generator client {
  provider = "prisma-client-js"
  output   = "./generated/own_database"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model Employee {
  ...
}
And I have another one, which is like this:
generator client {
  provider = "prisma-client-js"
  output   = "./generated/another_database"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL_2")
}

model Costs {
  cod_center_cost Int      @id
  description     String?  @db.VarChar(100)
  status          Boolean?
  classification  Int?
}
...
I have a model that needs a relation to another model, but how can I refer to a model defined in another file?
Splitting Prisma's schema file is not officially supported yet.
However, there are some community-provided solutions like Aurora, which gives you the ability to split Prisma schemas and import them.
Here's the Official GitHub Issue where splitting schema file is being tracked by Prisma Team.
My code uses Slick 3.0. It has a common db object.
object Common {
  private[database] val db = Database.forURL(
    url = // read from config,
    user = // read from config,
    password = // read from config
  )
}
Then, in my database service objects, my methods look like:
private lazy val myTableQuery = TableQuery[MyTable]

def getTableObjects: Future[Seq[MyTableObject]] = {
  val action = myTableQuery.result
  Common.db.run(action)
}
where I'm re-using the Common.db throughout multiple services.
In Slick 3.0, what's the idiomatic way to run a DB call?
I saw in the Slick 2.0 docs that an implicit session can be used.
However, I'm not sure if what I'm doing is correct in Slick 3.0.
You no longer need an implicit session.
I'm currently on mobile, but check out the sample chapters of Essential Slick: http://underscore.io/training/courses/essential-slick/
It shows how to do it now.
I am one of the authors.
Jono
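For reference, the pattern shown in the question is already the idiomatic Slick 3.x style: one long-lived Database instance for the whole app, db.run per action, and no implicit session anywhere. A minimal self-contained sketch, where the config path "mydb", the Postgres profile, and the table/columns are assumptions for illustration:

import scala.concurrent.Future
import slick.driver.PostgresDriver.api._

// One long-lived Database instance for the whole application
object Common {
  // Reads url/user/password (and pool settings) from application.conf
  val db = Database.forConfig("mydb")
}

// Illustrative table definition
class MyTable(tag: Tag) extends Table[(Long, String)](tag, "my_table") {
  def id   = column[Long]("id", O.PrimaryKey)
  def name = column[String]("name")
  def *    = (id, name)
}

object MyTableService {
  private lazy val myTableQuery = TableQuery[MyTable]

  // db.run turns a DBIO action into a Future; no session management needed
  def getTableObjects: Future[Seq[(Long, String)]] =
    Common.db.run(myTableQuery.result)
}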
We are building an application that uses a legacy framework utilising ADO.NET. This framework manages its own connection to the DB for calls to its code API.
For any customisations and custom tables we are using Entity Framework and hence a separate connection to the DB is made.
The application and DB are to be hosted on Azure.
What we would like to do is wrap both calls to the legacy framework and to Entity Framework into the same transaction.
Our understanding is that this is a distributed transaction, but this feature is not available in Azure.
Is there a way to make this to work in the Azure environment?
e.g.
using (var transaction = new TransactionScope())
{
    using (var db = new EntityFrameworkDBEntities())
    {
        Order order = db.Orders.FirstOrDefault();
        order.Name = "1";
        db.SaveChanges();
    }
    using (var legacyAPI = new LegacyAPI())
    {
        Customer customer = legacyAPI.GetCustomers.FirstOrDefault();
        customer.Name = "Charles";
        legacyAPI.SaveCustomer(customer);
    }
    transaction.Complete();
}
AFAIK you need to use the same connection for your transaction, since SQL Azure doesn't support distributed transactions. ADO.NET will escalate to a distributed transaction if you use multiple connections in the same transaction, even when all of them are connected to the same database.
As Shaun Xu says, you need to use just one connection. If you are able to change your LegacyAPI to take an open connection and a transaction as input, here is how, using EF6 and edmx:
var workspace = new MetadataWorkspace(new[] { "res://*/" }, new[] { Assembly.GetExecutingAssembly() });
using (var connection = new SqlConnection("Normal ADO connection string with MultipleActiveResultSets=True"))
{
    using (var entityConnection = new EntityConnection(workspace, connection, false))
    {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        {
            using (var db = new EntityFrameworkDBEntities(entityConnection))
            {
                db.Database.UseTransaction(transaction);
                // Do stuff with db
                db.SaveChanges();
            }
            // Do ADO stuff on LegacyAPI using the connection and transaction objects
            transaction.Commit();
        }
    }
}
To obtain the extra constructor on your DbContext, you add this partial class, where false indicates that you open and close the connection manually.
partial class EntityFrameworkDBEntities
{
    public EntityFrameworkDBEntities(DbConnection connection) : base(connection, false) { }
}
As a bonus you now only need one connection string in your config and it doesn't include all the useless EF junk that normally comes with it (metadata=res://*/blabla).
This also works if, say, you have a database with multiple schemas and an edmx for each. Note that although the EntityConnections are identical, you need one for each dbcontext.
Do I have to have a domain object to query mongodb?
What if I just want some raw data to be displayed? What would be the syntax to query mongodb from my controller?
I tried
def var = db.nameOfMyCollection.find()
but it says there is no such property as db in my controller class.
I know that my application is connecting to the database because I am monitoring mongo server log and it increases the number of connections by one when I launch my grails app.
Assuming you have added the MongoDB Java driver dependency to your build config and refreshed your dependencies, create a Grails service named MongoService.groovy and put the following code in it.
Don't forget the com.mongodb import.
package com.organisation.project

import com.mongodb.*

class MongoService {
    private static MongoClient mongoClient
    private static host = "localhost" // your host name
    private static port = 27017 // your port no.
    private static databaseName = "your-mongo-db-name"

    // Create the MongoClient once and reuse it for the whole application
    public static MongoClient client() {
        if (mongoClient == null) {
            mongoClient = new MongoClient(host, port)
        }
        return mongoClient
    }

    public DBCollection collection(collectionName) {
        DB db = client().getDB(databaseName)
        return db.getCollection(collectionName)
    }
}
We can now use this MongoService in our controllers or other services.
Now you can do following stuff in your controller.
Don't forget to import com.mongodb.DBCursor.
package com.organisation.project

import com.mongodb.DBCursor

class YourControllerOrService {
    def mongoService // injects the MongoService

    def method() {
        def collection = mongoService.collection("your-collection-name")
        DBCursor cursor = collection.find()
        try {
            while (cursor.hasNext()) {
                def doc = cursor.next()
                println doc // prints raw data from that collection, if any
            }
        } finally {
            cursor.close()
        }
    }
}
For more info, refer to the MongoDB Java driver docs.
Ok, solved.
This is how you go about accessing the database.
import com.mongodb.*
MongoClient mongoClient = new MongoClient("localhost", 27017)
DB db = mongoClient.getDB("db");
I actually solved it using Java and then pasted it into Groovy, where it works as well, which shouldn't come as a surprise. The difference is that in Java you have to add the driver JAR yourself, whereas in Grails you install the Mongo GORM plugin.
Assuming you are using the MongoDB GORM Plugin, if you have domain classes in your grails application, you can use them as you would with any relational db backend.
However, per this documentation, you can access the low-level Mongo API in any controller or service by first declaring a property mongo, just as you would a service, then getting the database you are targeting:
def mongo

def myAction = {
    def db = mongo.getDB("mongo")
    db.languages.insert([name: 'Groovy'])
}
I tried using MongoDB 2.0.6 to replace MySQL 5.5.25 for a test Grails 2.1 App and am encountering some strange problems.
Issues when using MongoDB but not MySQL:
When using scaffolding, I cannot get the fields to be ordered by using static constraints.
When I specify inList as a constraint, I get a drop-down when using the MySQL backend, but a plain text field when using the MongoDB backend.
No * (asterisk) appears on fields where the blank: false constraint is specified.
Domain Class:
package study

class Student {
    String login
    String firstName
    String lastName
    String gender
    Boolean active
    Date dateCreated
    Date lastUpdated

    static constraints = {
        login()
        firstName(blank: false)
        lastName(blank: false)
        gender(inList: ['M', 'F'])
        active()
    }
}
Controller:
package study

class StudentController {
    def scaffold = true
}
DataSource.groovy (MySQL stuff commented out):
grails {
    mongo {
        host = "dev-linux"
        port = 27017
        username = "study"
        password = "********"
        databaseName = "study"
    }
}
//dataSource {
// pooled = true
// driverClassName = "com.mysql.jdbc.Driver"
// dialect = "org.hibernate.dialect.MySQL5InnoDBDialect"
// username = "study"
// password = "********"
// dbCreate = "create-drop" // one of 'create', 'create-drop','update'
// url = "jdbc:mysql://dev-linux:3306/study"
//
//}
//hibernate {
// cache.use_second_level_cache = true
// cache.use_query_cache = true
// cache.provider_class = "net.sf.ehcache.hibernate.EhCacheProvider"
//}
BuildConfig.groovy (the plugins section shown is all I changed to put MongoDB in place of MySQL; the remainder of the file is the default created by Grails):
plugins {
    // build ":hibernate:$grailsVersion"
    // compile ":mysql-connectorj:5.1.12"
    compile ":mongodb:1.0.0.GA"
    build ":tomcat:$grailsVersion"
}
The only changes I made to put in MongoDB and take out MySQL are the changes to DataSource.groovy and BuildConfig.groovy shown above.
Is there any configuration item that I am missing?
I did see someone mention on this Nabble forum post that the field ordering may be an issue with MongoDB.
However, this post did not have any details.
Also, I did not understand why or how the back-end database engine could impact how the view is rendered when using scaffolding, specifically the field ordering on a page and drop-down vs. text field.
I would have thought that would come from the Domain Class's field types and constraints.
Has anyone come across this odd behavior when using Grails+Scaffolding with MongoDB before? Does anyone know of a fix or have any insight?
Thank you very much in advance, I appreciate it.
Scaffolding with MongoDB works; the problem is that if you just install the MongoDB plugin, Grails will see ambiguous domain mappings, and errors like these pop up. You need to either:
Remove the Hibernate plugin like this:
grails uninstall-plugin hibernate
Also remove these lines from BuildConfig.groovy:
runtime ":database-migration:1.1"
runtime ":hibernate:$grailsVersion"
Or explicitly tell Grails that a given domain class is persisted by Mongo by adding this line to it:
static mapWith="mongo"