One of my evolution contains a very simple table definition with BIGSERIAL column:
CREATE TABLE product (
id BIGSERIAL NOT NULL PRIMARY KEY,
name VARCHAR NOT NULL,
color VARCHAR NOT NULL
);
I use SlickCodeGenerator to generate classes from the database itself:
case class ProductRow(id: Long, name: String, color: String)
implicit def GetResultProductRow(implicit e0: GR[Long], e1: GR[String]): GR[ProductRow] = GR{
prs => import prs._
ProductRow.tupled((<<[Long], <<[String], <<[String]))
}
class Product(_tableTag: Tag) extends Table[ProductRow](_tableTag, "product") {
def * = (id, name, color) <> (ProductRow.tupled, ProductRow.unapply)
def ? = (Rep.Some(id), Rep.Some(name), Rep.Some(color)).shaped.<>({r=>import r._; _1.map(_=> ProductRow.tupled((_1.get, _2.get, _3.get)))}, (_:Any) => throw new Exception("Inserting into ? projection not supported."))
val id: Rep[Long] = column[Long]("id", O.AutoInc, O.PrimaryKey)
val name: Rep[String] = column[String]("name")
val color: Rep[String] = column[String]("color")
}
lazy val Product = new TableQuery(tag => new Product(tag))
I would like to insert a row into the Product table without an id, so that it is generated by the database. The problem is that ProductRow.id is not optional. Most solutions suggest adding new methods to the Product class, but I must not touch the sources generated by Slick, as those changes could be lost at any time. Is there a way to insert a row without an id using such generated files?
I use Slick 3.0 and Play Framework 2.4.1.
Edit: Sending a dummy id solves the issue, but it is redundant over the wire. I am looking for something like: insert into product(name, color) values('name', 'color').
You can send any value as the id in ProductRow; since the column is BIGSERIAL, it will be replaced by the auto-generated value. And if you don't want to send an id at all, create a case class without the id but with the same remaining fields as Product:
case class ProductSimilar(name: String, color: String)
val prodSimilar = ProductSimilar("name", "color")
Before inserting, copy its fields (with a placeholder id) into a ProductRow and insert that into the database:
val db: PostgresDriver.backend.DatabaseDef = Database.forURL(url, user = user, password = password, driver = jdbcDriver)
// The placeholder id is ignored because the column is auto-incremented;
// "returning Product" gives back the inserted row with the generated id (PostgreSQL).
val row = ProductRow(0L, prodSimilar.name, prodSimilar.color)
db.run(Product returning Product += row)
Hope this is helpful...
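Alternatively, if you want the generated SQL to actually skip the id column, a minimal sketch (reusing the generated Product TableQuery and column names from the question; the value names here are made up) is to insert through a projection of just the non-id columns:
// Insert only name and color; the database fills in the BIGSERIAL id.
// "returning" hands back the generated id.
val insertNameColor =
  Product.map(p => (p.name, p.color)) returning Product.map(_.id)

db.run(insertNameColor += (("some name", "some color")))
// roughly: insert into "product" ("name", "color") values (?, ?)
This produces the insert into product(name, color) statement asked for in the edit, without touching the generated sources.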
For a Scala project, I'm using play-slick with play-slick-evolutions, both version 5.0.0, and I generate my db classes with slick-codegen version 3.3.3.
I have a table with a primary key column and some columns with default values. I want to insert one row without specifying the primary key column or any of the columns with default values. Ideally, this action should return the new primary key of the created row.
My problem is that the code generated by slick-codegen seems to only allow inserting full rows, because it uses its own case class for the rows. This is how the generated code looks (without the comments):
case class SalesOrderRow(idSalesOrder: Int, fkCustomer: Int, createdAt: java.sql.Timestamp, createdBy: Option[String] = None)
implicit def GetResultSalesOrderRow(implicit e0: GR[Int], e1: GR[java.sql.Timestamp], e2: GR[Option[String]]): GR[SalesOrderRow] = GR{
prs => import prs._
SalesOrderRow.tupled((<<[Int], <<[Int], <<[java.sql.Timestamp], <<?[String]))
}
class SalesOrder(_tableTag: Tag) extends profile.api.Table[SalesOrderRow](_tableTag, Some("test"), "sales_order") {
def * = (idSalesOrder, fkCustomer, createdAt, createdBy) <> (SalesOrderRow.tupled, SalesOrderRow.unapply)
def ? = ((Rep.Some(idSalesOrder), Rep.Some(fkCustomer), Rep.Some(createdAt), createdBy)).shaped.<>({r=>import r._; _1.map(_=> SalesOrderRow.tupled((_1.get, _2.get, _3.get, _4)))}, (_:Any) => throw new Exception("Inserting into ? projection not supported."))
val idSalesOrder: Rep[Int] = column[Int]("id_sales_order", O.AutoInc, O.PrimaryKey)
val fkCustomer: Rep[Int] = column[Int]("fk_customer")
val createdAt: Rep[java.sql.Timestamp] = column[java.sql.Timestamp]("created_at")
val createdBy: Rep[Option[String]] = column[Option[String]]("created_by", O.Length(20,varying=true), O.Default(None))
}
lazy val SalesOrder = new TableQuery(tag => new SalesOrder(tag))
With this generated code I can insert a row by specifying the full row:
val insertActionsNotSoNice =
DBIO.seq(
salesOrders += Tables.SalesOrderRow(0, 3, new Timestamp(System.currentTimeMillis()), Some("这个不好"))
)
But I want to omit the primary key and the timestamp parameter that has a default value. However, I can't do something like this:
val insertActionsNotCompiling =
DBIO.seq(
salesOrders.map(so => (so.fkCustomer, so.createdBy) += (3, Some("这个好")))
)
I found many examples of the latter approach in the Slick documentation and on the web, but in all of them the table classes were mapped to a tuple instead of their own row class, like
class Coffees(tag: Tag) extends Table[(String, Int, Double, Int, Int)](tag, "COFFEES")
instead of
class Coffees(_tableTag: Tag) extends profile.api.Table[CoffeesRow](_tableTag, Some("test"), "coffee")
Do I have to throw slick-codegen out of my project and write all the classes myself to fit my needs, or is my combination of libraries with play-slick wrong? Or is there a simple, undocumented trick to omit columns when inserting?
After some days of trying several things, I found one solution.
You can map over a query to select only the columns you want to insert, and then use returning if you want to get back the id of the inserted row.
import dao.Tables.profile.api._
// ...
val salesOrders = TableQuery[SalesOrder]
val insertStatement = salesOrders.map(so => (so.fkCustomer, so.createdBy)) returning salesOrders.map(_.idSalesOrder) into ((_, id) => id)
Then you can write something like:
val newSalesOrderIdFuture = db.run(insertStatement += (5, Some("有效")))
So I omitted the primary key column (id_sales_order) and a timestamp column that defaults to now() in the database (created_at) in my insert statement.
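If the new id is needed right away, the insert can be composed with a follow-up query in one transaction. A sketch under the same assumptions (the literal values are placeholders, and an implicit ExecutionContext is assumed to be in scope for the for-comprehension):
// Insert, then read the freshly created row back using the returned id.
val createAndFetch = for {
  newId <- insertStatement += ((7, Some("placeholder")))
  row   <- salesOrders.filter(_.idSalesOrder === newId).result.head
} yield row

val newRowFuture = db.run(createAndFetch.transactionally)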
I'm using Slick code-gen to output a very normal Tables.scala file, which maps the tables/columns of the structure of my database.
However, I want to EXTEND the functionality of those tables in my DAOs, and it's proving to be impossible for me (I'm fairly new to Scala and Play Framework).
INSIDE Tables.scala, INSIDE the class Meeting, you can write functions that have access to the columns, e.g.:
class Meeting(_tableTag: Tag) extends profile.api.Table[MeetingRow](_tableTag, "meeting") {
def * = (id, dateandtime, endtime, organisationid, details, adminid, datecreated, title, agenda, meetingroom) <> (MeetingRow.tupled, MeetingRow.unapply)
def ? = (Rep.Some(id), Rep.Some(dateandtime), Rep.Some(endtime), Rep.Some(organisationid), Rep.Some(details), Rep.Some(adminid), Rep.Some(datecreated), Rep.Some(title), Rep.Some(agenda), Rep.Some(meetingroom)).shaped.<>({r=>import r._; _1.map(_=> MeetingRow.tupled((_1.get, _2.get, _3.get, _4.get, _5.get, _6.get, _7.get, _8.get, _9.get, _10.get)))}, (_:Any) => throw new Exception("Inserting into ? projection not supported."))
val id: Rep[Int] = column[Int]("id", O.AutoInc, O.PrimaryKey)
val dateandtime: Rep[java.sql.Timestamp] = column[java.sql.Timestamp]("dateandtime")
val endtime: Rep[java.sql.Timestamp] = column[java.sql.Timestamp]("endtime")
val organisationid: Rep[Int] = column[Int]("organisationid")
val details: Rep[String] = column[String]("details", O.Default(""))
val adminid: Rep[Int] = column[Int]("adminid")
val datecreated: Rep[java.sql.Timestamp] = column[java.sql.Timestamp]("datecreated")
val title: Rep[String] = column[String]("title", O.Default(""))
val agenda: Rep[String] = column[String]("agenda", O.Default(""))
val meetingroom: Rep[Int] = column[Int]("meetingroom")
def getAttendees = Tables.Meeting2uzer.filter(_.meetingid === id)
}
where "ID" in the above function is a column in Meeting.
now the problem arises when i want to write that same function "getAttendees" in my DAO which doesn't have access to the columns in scope.
something along the lines of....
@Singleton
class SlickMeetingDAO @Inject()(db: Database)(implicit ec: ExecutionContext) extends MeetingDAO with Tables {
override val profile: JdbcProfile = _root_.slick.jdbc.PostgresProfile
import profile.api._
private val queryById = Compiled((id: Rep[Int]) => Meeting.filter(_.id === id))
def getAttendees = Meeting2uzer.filter(_.meetingid === "NEED ID OF COLUMN IN SCOPE")
How do I get 'id', which is a column in Tables.Meeting, into scope in my DAO to finish this getAttendees function?
If I understand correctly, you're trying to join two tables?
Meeting2uzer.join(Meeting).on(_.meetingid === _.id)
What you've done inside Tables.scala would be more in line with creating a foreign key in Slick. You could define a Slick foreign key instead of using an explicit join; see the Slick documentation on foreign keys.
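For illustration, a foreign key on the Meeting2uzer side might look roughly like the sketch below (placed inside the generated Meeting2uzer table class; the constraint name is made up):
// A sketch, not actual codegen output:
lazy val meetingFk =
  foreignKey("meeting2uzer_meeting_fk", meetingid, Meeting)(_.id,
    onUpdate = ForeignKeyAction.NoAction, onDelete = ForeignKeyAction.NoAction)
Note that if the foreign key constraint already exists in the database, slick-codegen will generate a definition like this for you automatically.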
I have two tables that are identical, each in a different database. Also, the tables have different names.
I have hardcoded the name of the table in the class BankDB:
class BankDB(tag: Tag) extends Table[Bank](tag, "banks1") {
def sk = column[Int]("sk", O.PrimaryKey)
def name = column[String]("name")
// other columns
What I need is, depending on the database name, to set the name of the table, like so:
val tableName = if (dbName == "DB1") "banks1" else "banks2"
And then use tableName in TableQuery to have Slick point to the correct table:
val db = Database.forConfig(dbName)
try {
val banks = TableQuery[BankDB](tableName) // <== this doesn't work
val future = db.run(banks.filter(_.sk === sk).result)
val result = Await.result(future, Duration.Inf)
result
} finally db.close
Is this possible? Or do I need to define two classes, one for each database?
class BankDB(tag: Tag, tableName: String) extends Table[Bank](tag, tableName) {
  // same column definitions (sk, name, ...) and the * projection as before
}

def getTableQuery(tableName: String) =
  TableQuery[BankDB]((t: slick.lifted.Tag) => new BankDB(t, tableName))
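Used together with the question's code, that might look roughly like this (a sketch; dbName, sk and the Await/Duration imports are assumed to be as in the question):
// Pick the table name by database and query through the parameterized TableQuery.
val banks = getTableQuery(if (dbName == "DB1") "banks1" else "banks2")

val db = Database.forConfig(dbName)
try {
  val future = db.run(banks.filter(_.sk === sk).result)
  Await.result(future, Duration.Inf)
} finally db.close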
I downloaded a starter application with Play Framework 2.5 and Slick 3.1 (the play-slick3-example git repository).
When I add a simple column named "test" to Project.scala, I get this error:
[JdbcSQLException: Column "TEST" not found; SQL statement:
select "ID", "NAME", "TEST" from "PROJECT" [42122-187]]
I just changed the Project case class by adding the argument test, and the ProjectsTable class by updating the functions test, * and ?:
case class Project(id: Long, name: String, test: String)
private class ProjectsTable(tag: Tag) extends Table[Project](tag, "PROJECT") {
def id = column[Long]("ID", O.AutoInc, O.PrimaryKey)
def name = column[String]("NAME")
def test = column[String]("TEST")
def * = (id, name, test) <> (Project.tupled, Project.unapply)
def ? = (id.?, name.?, test.?).shaped.<>({ r => import r._; _1.map(_ => Project.tupled((_1.get, _2.get, _3.get))) }, (_: Any) => throw new Exception("Inserting into ? projection not supported."))
}
And the create function:
def create(name: String): Future[Long] = {
val project = Project(0, name, "d")
db.run(Projects returning Projects.map(_.id) += project)
}
Thank you very much for your help!
By adding the field test to the ProjectsTable model you are telling Slick that the database table PROJECT has a column named TEST, but the table PROJECT does not have such a column. To make this work, you also need a new evolution that adds the TEST column to the PROJECT table.
You can see the SQL schema here: https://github.com/nemoo/play-slick3-example/blob/6c122970d7506bedb9230e3a31c30a5c7e27e93b/conf/evolutions/default/1.sql
This requirement should be really easy, but I don't know why it is not working. I want to delete a row based on its id using Slick with Play Framework.
I'm following this example from the play-slick module, but the compiler complains that value delete is not a member of scala.slick.lifted.Query[models.Tables.MyEntity,models.Tables.MyEntity#TableElementType].
My controller looks like:
def delete(id: Int) = DBAction{ implicit rs =>
val q = MyEntity.where(_.id === id)
q.delete
Ok("Entity deleted")
}
I've imported the play.api.db.slick.Config.driver.simple._
What am I doing wrong?
Edit:
My schema definition looks like:
class Cities(tag: Tag) extends Table[CityRow](tag, "cities") {
def * = (cod, name, state, lat, long, id) <> (CityRow.tupled, CityRow.unapply)
/** Maps whole row to an option. Useful for outer joins. */
def ? = (cod.?, name.?, state.?, lat, long, id.?).shaped.<>({r=>import r._; _1.map(_=> CityRow.tupled((_1.get, _2.get, _3.get, _4, _5, _6.get)))}, (_:Any) => throw new Exception("Inserting into ? projection not supported."))
val cod: Column[String] = column[String]("cod")
val name: Column[String] = column[String]("name")
val state: Column[Int] = column[Int]("state")
val lat: Column[Option[String]] = column[Option[String]]("lat")
val long: Column[Option[String]] = column[Option[String]]("long")
val id: Column[Int] = column[Int]("id", O.AutoInc, O.PrimaryKey)
/** Foreign key referencing Departamentos (database name fk_ciudades_departamentos1) */
lazy val stateFk = foreignKey("fk_cities_states", state, States)(r => r.idstate, onUpdate=ForeignKeyAction.NoAction, onDelete=ForeignKeyAction.NoAction)
}
I also had a look at that example some time ago and it looked wrong to me too; I'm not sure whether I was doing something wrong myself or not. The delete function was always a bit tricky to get right, especially using the lifted.Query (like you are doing). Later in the process I made it work by importing the right driver, in my case scala.slick.driver.PostgresDriver.simple._.
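For reference, a minimal sketch of the controller with that driver import in scope (assuming Slick 2.x, a TableQuery named Cities for the table shown in the question, and the DBAction style used there), where delete then compiles on the query:
import scala.slick.driver.PostgresDriver.simple._

def delete(id: Int) = DBAction { implicit rs =>
  // With the driver's implicits in scope, the lifted query gains a delete method.
  Cities.filter(_.id === id).delete
  Ok("Entity deleted")
}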
Edit after comment:
You probably have an error in the shape function; it's hard to say without looking at your schema declaration. This is an example:
case class CityRow(id: Long, name: String)

class City(tag: Tag) extends Table[CityRow](tag, "city") {
  // this is the shape function
  def * = (id, name) <> (CityRow.tupled, CityRow.unapply)

  def ? = (id.?, name).shaped.<>({
    r => import r._
      _1.map(_ => CityRow.tupled((_1.get, _2)))
  }, (_: Any) => throw new Exception("Inserting into ? projection not supported."))

  val id: Column[Long] = column[Long]("id", O.AutoInc, O.PrimaryKey)
  val name: Column[String] = column[String]("name")
}