How to update a field which is indexed? - scala

I want to update a field in Cassandra which is indexed, using the phantom Scala SDK:
this.update.where(_.id eqs folderId)
  .and(_.owner eqs owner)
  .modify(_.parent setTo parentId)
The parent field is an indexed field in the table, but the operation is not allowed: compiling the code fails with an error like:
[error] C:\User\src\main\scala\com\autodesk\platform\columbus\cassandra\DataItem.scala:161: could not find implicit value for evidence parameter of type com.websudos.phantom.column.ModifiableColumn[T]
The error is caused by updating the indexed field.
My workaround is to delete the record and insert a new record to "update" it.
Is there a better way to handle this situation?

You are not allowed to update a field that is part of the primary key, because if you do so you are rendering Cassandra unable to ever re-compose the hash of the row you are updating.
Read here for details on the topic. In essence, if you had a HashMap[K, V] what you are trying to do is update the K, but in doing so you will never be able to retrieve the same V again.
So in Cassandra, just like in the HashMap, an update to an index is done with a DELETE and then a new INSERT. That's why phantom intentionally prevents your query from compiling; I wrote those compile-time restrictions in for the specific purpose of preventing invalid CQL.
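For reference, a minimal sketch of that delete-then-insert workaround, written against the same table model as the question's update (the moveFolder helper and the UUID/String column types are assumptions, and phantom's usual implicit session and keyspace must be in scope):

import java.util.UUID
import scala.concurrent.ExecutionContext.Implicits.global

def moveFolder(folderId: UUID, owner: String, newParent: UUID) =
  for {
    // 1. DELETE the old row, addressed by its full primary key
    _ <- this.delete.where(_.id eqs folderId).and(_.owner eqs owner).future()
    // 2. INSERT a fresh row carrying the new parent value
    _ <- this.insert
      .value(_.id, folderId)
      .value(_.owner, owner)
      .value(_.parent, newParent)
      .future()
  } yield ()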

Related

Why is unique constraint not working in Ecto?

I have the following in my User model:
def changeset(user, attrs) do
  user
  |> cast(attrs, [:login, :email])
  |> validate_required([:login, :email])
  |> unique_constraint(:login)
  |> unique_constraint(:email)
end
However, just setting the unique_constraint this way does not work: I'm still getting duplicate login and email values when testing my controller.
I got this working, but I had to set the :unique keyword argument to true in the model schema as well as create a unique index per column in the migration.
Is Ecto not checking the constraint itself in addition to the PostgreSQL unique index? Is there any point to adding a unique_constraint to the changeset/2 function?
The unique constraint works by relying on the database to check if the unique constraint has been violated or not and, if so, Ecto converts it into a changeset error.
— Ecto.Changeset.unique_constraint/3
That said, the reason for unique_constraint/3 to exist at all is to unify errors (i.e. to turn what was received from the DB into a changeset error). That eases and standardizes error handling.
Ecto cannot check the constraint on its own without relying on the DB: only the database can enforce uniqueness atomically, since any check done in application code would race with concurrent inserts.

Is 'jdbcType' necessary in a MyBatis resultMap?

When we use MyBatis, I know we need to set jdbcType in a <select>...</select> statement because the IN variable may be null. But when I read the MyBatis documentation, I found jdbcType in <result>...</result> under resultMap. The documentation for jdbcType in <result>...</result> says:
... The JDBC type is only required for nullable columns upon insert, update or delete. This is a JDBC requirement, not a MyBatis one. So even if you were coding JDBC directly, you'd need to specify this type – but only for nullable values.
The bold words say it is only required for nullable columns upon insert, update or delete.
But the result element is used in select, not in insert, update or delete.
So, is it necessary to use jdbcType in <result>...</result>?
Most of the time, no. Why? Read on.
If you want to use a null as a JDBC parameter value you need to specify the jdbcType. That's a restriction of the JDBC specification you can't avoid. Therefore, if there's even a remote possibility a JDBC parameter could have a null value, then yes, specify it.
This does not apply to parameters preprocessed by MyBatis inside MyBatis tags, like the ones you use in the "test" attribute of the <if> tag. Those are not JDBC parameters.
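To see the underlying JDBC rule in isolation, here is a minimal sketch of coding it directly (written in Scala to match the other examples here; the in-memory H2 URL and the person table are hypothetical):

import java.sql.{DriverManager, Types}

val conn = DriverManager.getConnection("jdbc:h2:mem:demo")
val ps = conn.prepareStatement("INSERT INTO person(name, nickname) VALUES (?, ?)")
ps.setString(1, "Alice")
// A nullable parameter cannot be passed as a bare null:
// JDBC wants the SQL type, which is exactly what jdbcType supplies.
ps.setNull(2, Types.VARCHAR)
ps.executeUpdate()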
Now, for the columns you read. These are the ones you are interested in. The thing is, most of the time you don't need it: MyBatis will pick the right JDBC type for you. Well... this has been the case for me 99.999% of the time.
What about the other 0.001%? For some exotic column types -- ones you rarely use -- MyBatis may pick the wrong JDBC type. The designers of MyBatis thought about this case and gave you the chance to override it. I think I remember an XML column type that MyBatis was unsuccessfully trying to read as a VARCHAR, but I don't remember which database.
Bottom line, don't use it when reading columns, unless MyBatis reads exotic data type columns (XML, UUID, POINT, etc.) the wrong way.

How can we check the existence of an element in PostgreSQL with Scala?

I am new to Scala and Slick. I am unsure of the proper way to check for the existence of an item in the DB (PostgreSQL): I need to implement an insert-or-update method. I have written an update, but it does not work properly and an error occurs:
ERROR: duplicate key value violates unique constraint "IDX_COMPETENCE_SID_UID"
Detail: Key ("SKILL_ID", "USER_ID")=(2, 20198) already exists. [Sanitized]
def update(skillRow: SkillWithVisibility): DBIO[Int] = {
  // TODO skill existence check?
  selectByIdForUpdateQ(skillRow.id, skillRow.companyId) update skillRow
}
What is the best way to modify this method so that it checks for the skill's existence and updates the skill if it exists?
You can use insertOrUpdate, or write your own if you need to. You can read about it in this underscore blog post.
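A minimal sketch of insertOrUpdate with Slick 3.x, assuming a hypothetical skills table keyed on (SKILL_ID, USER_ID) like the one in the error above (on PostgreSQL, Slick 3.2+ compiles this to INSERT ... ON CONFLICT ... DO UPDATE):

import slick.jdbc.PostgresProfile.api._

class Skills(tag: Tag) extends Table[(Int, Int, String)](tag, "SKILLS") {
  def skillId = column[Int]("SKILL_ID")
  def userId  = column[Int]("USER_ID")
  def name    = column[String]("NAME")
  // The composite key the upsert is resolved against
  def pk = primaryKey("IDX_COMPETENCE_SID_UID", (skillId, userId))
  def * = (skillId, userId, name)
}
val skills = TableQuery[Skills]

// Inserts the row if the key is absent, updates it otherwise
val upsert: DBIO[Int] = skills.insertOrUpdate((2, 20198, "Scala"))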

Partial inserts with Cassandra and Phantom DSL

I'm building a simple Scala Play app which stores data in a Cassandra DB using the Phantom DSL driver for Scala. One of the nice features of Cassandra is that you can do partial updates i.e. so long as you provide the key columns, you do not have to provide values for all the other columns in the table. Cassandra will merge the data into your existing record based on the key.
Unfortunately, it seems this doesn't work with Phantom DSL. I have a table with several columns, and I want to be able to do an update, specifying values just for the key and one of the data columns, and let Cassandra merge this into the record as usual, while leaving all the other data columns for that record unchanged.
But Phantom DSL overwrites existing columns with null if you don't specify values in your insert/update statement.
Does anybody know of a work-around for this? I don't want to have to read/write all the data columns every time, as eventually the data columns will be quite large.
FYI I'm using the same approach to my Phantom coding as in these examples:
https://github.com/thiagoandrade6/cassandra-phantom/blob/master/src/main/scala/com/cassandra/phantom/modeling/model/GenericSongsModel.scala
It would be great to see some code, but partial updates are possible with phantom. Phantom is an immutable builder; it will not override anything with null by default. If you don't specify a value, it won't do anything about it.
database.table.update.where(_.id eqs id).modify(_.bla setTo "newValue")
will produce a query where only the values you've explicitly set to something are updated. Please provide some code examples; your problem seems really strange, as queries don't keep track of table columns to automatically add in what's missing.
Update
If you would like to delete column values, i.e. effectively set them to null inside Cassandra, phantom offers a different syntax which does the same thing:
database.table.delete(_.col1, _.col2).where(_.id eqs id)
Furthermore, you can even delete map entries in the same fashion:
database.table.delete(_.props("test"), _.props("test2")).where(_.id eqs id)
This assumes props is a MapColumn[Table, Record, String, _]; since props.apply(key: T) is typesafe, it will respect the key type you define for the map column.
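For completeness, a partial update touching more than one column can chain assignments with .and; a sketch, where col1 and col2 are hypothetical columns:

database.table.update
  .where(_.id eqs id)
  .modify(_.col1 setTo "newValue")
  .and(_.col2 setTo 42)
  .future()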

Discard values while inserting and updating data using slick

I am using slick with play2.
I have multiple fields in the database which are managed by the database itself. I don't want to set them on create or update, but I do want to get them when reading the values.
For example, suppose I have
case class MappedDummyTable(id: Int, .. 20 other fields, modified_time: Option[Timestamp])
which maps the Dummy table in the database. modified_time is managed by the database.
The problem is that during insert or update, I create an instance of MappedDummyTable without the modified_time attribute and pass it to Slick for create/update like
TableQuery[MappedDummyTable].insert(instanceOfMappedDummyTable)
For this, Slick creates a query like
INSERT INTO MappedDummyTable(id, ..., modified_time) VALUES (1, ..., null)
and sets modified_time to NULL, which I don't want. I want Slick to ignore such fields when creating and updating.
For updating, I can do
TableQuery[MappedDummyTable].map(fieldsToBeUpdated).update(values)
but this leads to 20-odd fields in the map method, which looks ugly.
Is there any better way?
Update:
The best solution I found was using multiple projections: I created one projection to get the values and another one to update and insert the data.
Maybe you need to write some triggers in the table if you don't want to write code like row => (row.id, ...other 20 fields).
Or try using None instead of null?
I believe that the solution with mapping non-default fields is the only way to do it with Slick. To make it less ugly, you can define a function ignoreDefaults on MappedDummyTable that returns only the non-default values, and a function in the companion object of the MappedDummyTable case class that returns the projection:
TableQuery[MappedDummyTable].map(MappedDummyTable.ignoreDefaults).insert(instanceOfMappedDummyTable.ignoreDefaults)
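Either way the idea is the same: read through the full default projection, write through a reduced one. A minimal sketch (Slick 3.x; the Dummy table and its two columns are hypothetical stand-ins for the 20 real fields):

import java.sql.Timestamp
import slick.jdbc.PostgresProfile.api._

case class DummyRow(id: Int, name: String, modifiedTime: Option[Timestamp])

class Dummy(tag: Tag) extends Table[DummyRow](tag, "DUMMY") {
  def id           = column[Int]("ID", O.PrimaryKey)
  def name         = column[String]("NAME")
  def modifiedTime = column[Option[Timestamp]]("MODIFIED_TIME")

  // Full projection, used for reads: includes the DB-managed column
  def * = (id, name, modifiedTime) <> (DummyRow.tupled, DummyRow.unapply)

  // Reduced projection, used for writes: omits MODIFIED_TIME so the
  // database keeps managing it
  def writable = (id, name)
}
val dummies = TableQuery[Dummy]

// Insert and update go through the reduced projection
val create = dummies.map(_.writable) += ((1, "foo"))
val change = dummies.filter(_.id === 1).map(_.writable).update((1, "bar"))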