Using EF Core 6...
I have a variety of values in a column I want to use as the discriminator column.
There are certain values (SERVER_STS) that I am mapping to entity classes:
modelBuilder.Entity<QueueEntry>()
.HasDiscriminator<string>("TaskType")
.HasValue<StatusEntry>(Constants.TaskServerStatus)
.IsComplete(false);
Ideally, I would want to map all the other values to a single entity class, QueueEntry. So something like:
modelBuilder.Entity<QueueEntry>()
.HasDiscriminator<string>("TaskType")
.HasValue<StatusEntry>(Constants.TaskServerStatus)
.HasOtherValue<QueueEntry>();
I thought the IsComplete might help, but I'm only getting back the StatusEntry rows.
I can't see a way of doing this. I see there is an Expression overload for the HasDiscriminator method, but I can't quite get my head around whether it might give me what I need.
So is this possible or will I need to segregate the entities at a higher level?
I'm building a simple Scala Play app which stores data in a Cassandra DB using the Phantom DSL driver for Scala. One of the nice features of Cassandra is that you can do partial updates i.e. so long as you provide the key columns, you do not have to provide values for all the other columns in the table. Cassandra will merge the data into your existing record based on the key.
Unfortunately, it seems this doesn't work with Phantom DSL. I have a table with several columns, and I want to be able to do an update, specifying values just for the key and one of the data columns, and let Cassandra merge this into the record as usual, while leaving all the other data columns for that record unchanged.
But Phantom DSL overwrites existing columns with null if you don't specify values in your insert/update statement.
Does anybody know of a work-around for this? I don't want to have to read/write all the data columns every time, as eventually the data columns will be quite large.
FYI I'm using the same approach to my Phantom coding as in these examples:
https://github.com/thiagoandrade6/cassandra-phantom/blob/master/src/main/scala/com/cassandra/phantom/modeling/model/GenericSongsModel.scala
It would be great to see some code, but partial updates are possible with Phantom. Phantom is an immutable builder; it will not overwrite anything with null by default. If you don't specify a value for a column, the query simply won't touch it.
database.table.update.where(_.id eqs id).modify(_.bla setTo "newValue")
will produce a query in which only the columns you've explicitly set are updated; everything else is left untouched. Please provide some code examples, as your problem seems really strange: queries don't keep track of table columns to automatically add in what's missing.
Update
If you would like to delete column values, i.e. effectively set them to null inside Cassandra, Phantom offers a different syntax which does the same thing:
database.table.delete(_.col1, _.col2).where(_.id eqs id)
Furthermore, you can even delete map entries in the same fashion:
database.table.delete(_.props("test"), _.props("test2")).where(_.id eqs id)
This assumes props is a MapColumn[Table, Record, String, _]. The props.apply(key: T) is typesafe, so it will respect the key type you define for the map column.
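To make that concrete, here is a minimal sketch of a partial update in Phantom 2.x-style syntax. The Song case class, the Songs table, and the updateTitle helper are illustrative assumptions, not taken from your code:

import java.util.UUID
import scala.concurrent.Future
import com.outworkers.phantom.dsl._

case class Song(id: UUID, title: String, artist: String)

abstract class Songs extends Table[Songs, Song] with RootConnector {
  object id extends UUIDColumn with PartitionKey
  object title extends StringColumn
  object artist extends StringColumn

  // Partial update: only `title` is written. Cassandra merges the row by key
  // and leaves `artist` (and every other column) untouched.
  def updateTitle(songId: UUID, newTitle: String): Future[ResultSet] =
    update.where(_.id eqs songId)
      .modify(_.title setTo newTitle)
      .future()
}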
Is there some Scala relational database framework (Anorm, Squeryl, etc.) that uses Postgres-like aggregators to produce lists after a group-by, or at least simulates their use?
I would expect two levels of implementation:
a "standard" one, where at least any SQL grouping with array_agg is translated to a List of the type which is being aggregated,
and a "scala ORM powered" one where some type of join is allowed so that if the aggregation is a foreign key to other table, a List of elements of the other table is produced. Of course this last thing is beyond the reach of SQL, but if I am using a more powerful language, I do not mind some steroids.
I find it especially intriguing that the documentation of Slick, which is built precisely on allowing Scala group-by notation, seems to explicitly rule out lists as the output of a group-by.
EDIT: use case
You have the typical many-to-many table of, say, products and suppliers, holding pairs (p_id, s_id). You want to produce a list of suppliers for each product, so the PostgreSQL query would be:
SELECT p_id, array_agg(s_id) FROM t1 GROUP BY p_id;
One could expect some idiomatic way to do this in Slick, but I do not see how. Furthermore, if we go to some ORM, then we could also consider the join with the tables products and suppliers, on p_id and s_id respectively, and get as the answer a zip (product, (supplier1, supplier2, ..., supplierN)) containing the objects and not only the ids.
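For reference, this is about as far as I can get with Slick's own groupBy (a sketch against a hypothetical mapping of t1, in Slick 2.x-style syntax): each group has to be reduced to scalar aggregates such as a count, which is exactly why the List output seems out of reach.

import scala.slick.driver.PostgresDriver.simple._

// Hypothetical mapping for the many-to-many table t1(p_id, s_id).
class T1(tag: Tag) extends Table[(Int, Int)](tag, "t1") {
  def pId = column[Int]("p_id")
  def sId = column[Int]("s_id")
  def * = (pId, sId)
}
val t1 = TableQuery[T1]

// Slick can reduce each group to scalars (length, max, ...), but it offers
// no built-in aggregate that yields a collection per group.
val suppliersPerProduct = t1.groupBy(_.pId).map {
  case (pid, group) => (pid, group.length)
}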
I am also not sure I understand your question correctly, could you elaborate?
In Slick you currently cannot use Postgres's "array_agg" or "string_agg" as a method on type Query. If you want to use these specific functions you need to fall back to custom SQL. But: I added an issue some time ago (https://github.com/slick/slick/issues/923, you should follow this discussion) and we have a prototype from cvogt ready for this.
I needed to use "string_agg" in the past and added a patch for it (see https://github.com/mobiworx/slick/commit/486c39a7ed90c9ccac356dfdb0e5dc5c24e32d63), so maybe this is helpful to you. Look at "AggregateTest" to learn more about it.
Another possibility is to encapsulate the usage of "array_agg" in a database view and just use this view with slick. This way you do not need "array_agg" directly in slick.
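Until then, here is a sketch of the custom-SQL fallback against the t1(p_id, s_id) table from the question (the ProductSuppliers class and the hand-rolled parsing of the array's text form are illustrative assumptions):

import scala.slick.driver.PostgresDriver.simple._
import scala.slick.jdbc.{GetResult, StaticQuery => Q}

case class ProductSuppliers(pId: Int, supplierIds: List[Int])

// The Postgres JDBC driver renders an int[] column as "{1,2,3}" via
// getString, so we parse that textual form by hand.
implicit val getProductSuppliers = GetResult { r =>
  val pId = r.nextInt()
  val raw = r.nextString().stripPrefix("{").stripSuffix("}")
  val ids = if (raw.isEmpty) Nil else raw.split(",").map(_.trim.toInt).toList
  ProductSuppliers(pId, ids)
}

val query = Q.queryNA[ProductSuppliers](
  "SELECT p_id, array_agg(s_id) FROM t1 GROUP BY p_id")

// db.withSession { implicit session => query.list }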
You can use slick-pg.
It supports array_agg and other aggregate functions.
Your question is intriguing; care to elaborate a little on how it might ideally look? When you group by, you often have an additional column, such as count(*), over and above the standard columns from your case class, so what would the type of your List be?
Most of my (Anorm) methods either return a singleton (perhaps Option) or a List of that class's type. For each case class, I have an sqlFields variable (e.g. m.id, m.name, m.forManufacturer) and a single parser variable that I reference as either .as(modelParser.singleOpt) or .as(modelParser *). For foreign keys, a lazy val at the case class level (or a def if it needs to be) is pretty useful. E.g. if I had Model and Manufacturer entities, with a foreign key forManufacturer on Model, then I might define a lazy val manufacturer: Manufacturer = ... in the case class of the model, so that at any time I can refer to model.manufacturer. I can define joins as their own methods, either in this way or as methods in the companion object.
Not 100% sure I am answering your question, but thought this was a bit long for a comment.
Edit: If your driver supports parsing of PostgreSQL arrays, you could map them directly to a class like ProductSuppliers(id: Int, suppliers: List[Int]) (or even List[Supplier]?). In Anorm that's about as idiomatic as one could get, I think? For databases that don't support it, it seems to me similar to an order-by version, i.e. select p_id, s_id from t1 order by p_id, which you could groupBy p_id and similarly map to ProductSuppliers.
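Here is a sketch of what that could look like in Anorm, relying on its built-in column support for SQL arrays; the ProductSuppliers class and the column alias are illustrative:

import java.sql.Connection
import anorm._
import anorm.SqlParser._

case class ProductSuppliers(productId: Int, supplierIds: List[Int])

// Anorm can read a SQL array column as Array[Int] out of the box.
val parser: RowParser[ProductSuppliers] =
  int("p_id") ~ get[Array[Int]]("supplier_ids") map {
    case pId ~ sIds => ProductSuppliers(pId, sIds.toList)
  }

def productSuppliers()(implicit c: Connection): List[ProductSuppliers] =
  SQL("SELECT p_id, array_agg(s_id) AS supplier_ids FROM t1 GROUP BY p_id")
    .as(parser.*)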
I am using Slick with Play 2.
I have multiple fields that are managed by the database itself. I don't want to set them on create or update; however, I do want to get them back when reading the values.
For example, suppose I have
case class MappedDummyTable(id: Int, .. 20 other fields, modified_time: Option[Timestamp])
which maps the Dummy table in the database. modified_time is managed by the database.
The problem is that during insert or update, I create an instance of MappedDummyTable without the modified-time attribute and pass it to Slick for create/update, like
TableQuery[MappedDummyTable].insert(instanceOfMappedDummyTable)
For this, Slick creates query as
INSERT INTO MappedDummyTable (id, ..., modified_time) VALUES (1, ..., null)
and sets modified_time to NULL, which I don't want. I want Slick to ignore such fields when creating and updating.
For updating, I can do
TableQuery[MappedDummyTable].map(fieldsToBeUpdated).update(values)
but this leads to 20-odd fields in the map method, which looks ugly.
Is there any better way?
Update:
The best solution I found was using multiple projections: I created one projection to read the values, and another to insert and update the data.
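Roughly like this, with the 20-odd fields collapsed to a single name column for brevity (Slick 2.x-style syntax; the names are illustrative):

import java.sql.Timestamp
import scala.slick.driver.PostgresDriver.simple._

case class Dummy(id: Int, name: String, modifiedTime: Option[Timestamp])

class DummyTable(tag: Tag) extends Table[Dummy](tag, "Dummy") {
  def id = column[Int]("id", O.PrimaryKey)
  def name = column[String]("name")
  def modifiedTime = column[Option[Timestamp]]("modified_time")

  // Full projection: used for reads, includes the DB-managed column.
  def * = (id, name, modifiedTime) <> (Dummy.tupled, Dummy.unapply)

  // Write projection: leaves modified_time out so the database fills it in.
  def forInsert = (id, name)
}

val dummies = TableQuery[DummyTable]

// Inserts and updates go through the write projection only:
// dummies.map(t => t.forInsert).insert((1, "first"))
// dummies.filter(_.id === 1).map(t => t.forInsert).update((1, "renamed"))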
Maybe you need to write some triggers in the table if you don't want to write code like row => (row.id, ...other 20 fields).
Or try using None instead of null?
I believe the solution of mapping the non-default fields is the only way to do it with Slick. To make it less ugly, you can define a function ignoreDefaults on MappedDummyTable that returns only the non-default values, and a function in the companion object of the MappedDummyTable case class that returns the projection:
TableQuery[MappedDummyTable].map(MappedDummyTable.ignoreDefaults).insert(instanceOfMappedDummyTable.ignoreDefaults)
To achieve the best performance and validation convenience, which of these annotations are needed for a String field?
database: MySQL
A field to store district name
@Column(length=50) // javax.persistence.Column
Is this going to be converted to varchar(50)? Or do I need this one specifically:
@Column(columnDefinition="varchar(50)")
And another two annotations:
@MaxLength(50) // play.data.validation.Constraints.MaxLength
@Length(max=50) // com.avaje.ebean.validation.Length, is this one useful or not required anyway?
public String districtName;
I think I need @Column(length=50) for the definition and @MaxLength(50) for validation at the same time? Or will one of these two imply the other automatically?
Thanks.
As far as I know, when we mark a String variable with these annotations:
@javax.persistence.Column(length=50)
@javax.persistence.Column(columnDefinition="varchar(50)"). Note: I am using PostgreSQL, and this will create a column definition with the character varying data type.
@com.avaje.ebean.validation.Length(50)
the three annotations above have the same purpose: each will create a column definition with the character varying data type and a length of 50 characters in the database.
Without @Constraints.MaxLength(50), you will get an exception like the one below when you enter an input value whose length is greater than 50:
Execution Exception
[ValidationException: validation failed for: models.TheModel]
I think there should be a way to handle the above exception, but honestly I don't know how to do that yet.
Advice
My advice is to choose one of the three annotations above (whichever you prefer) together with @Constraints.MaxLength(50). For me it is the easiest and simplest way, and you can easily build the form using the Play framework's Scala template helpers.
This is probably a super simple question, but I'm struggling to come up with the right keywords to find it on Google.
I have a Postgres table that has among its contents a column of type text named content_type. That stores what type of entry is stored in that row.
There are only about 5 different types, and I decided I want to change how one of them is displayed in my application (I had been displaying the stored values directly).
It struck me that it's funny that my view is being dictated by my database model, and I decided I would convert the types being stored in my database as strings into integers, and enumerate the possible types in my application with constants that convert them into their display names. That way, if I ever got the urge to change any category names again, I could just change it with one alteration of a constant. I also have the hunch that storing integers might be somewhat more efficient than storing text in the database.
First, a quick threshold question of, is this a good idea? Any feedback or anything I missed?
Second, and my main question: what's the Postgres command I could enter to make an alteration like this? I'm thinking I could start by renaming the old content_type column to old_content_type and then creating a new integer column content_type. However, what command would look at a row's old_content_type and fill in the new content_type column based on that?
If you're finding that you need to change the display values, then yes, it's probably a good idea not to store them in the database. Integers are also more efficient to store and search, but I really wouldn't worry about that unless you've got millions of rows.
You just need to run an update to populate your new column:
UPDATE table_name
SET content_type = (CASE WHEN old_content_type = 'a' THEN 1
                         WHEN old_content_type = 'b' THEN 2
                         ELSE 3 END);
If you're on Postgres 8.4 then using an enum type instead of a plain integer might be a good idea.
Ideally you'd have these fields refer to a table containing the definitions of each type, via a foreign key constraint. This way you know that your database is clean and has no invalid values (i.e. referential integrity).
There are many ways to handle this:
1. Having a table for each field that can contain a number of values (i.e. like an enum) is the most obvious, but it breaks down when you have a table that requires many such attributes.
2. You can use the Entity-Attribute-Value model, but beware that this is easy to abuse and causes problems when things grow.
3. You can use, or refer to, my implementation solution PET (Parameter Enumeration Tables). This is a halfway house between 1 and 2.