HOW TO: squeryl full update - scala

I'm new to Squeryl and I have a question about Squeryl full updates.
Can anybody please explain what a full update actually is and how it is done?
I couldn't really understand full updates in the Squeryl guide.
Thanks.

A partial update is similar to an UPDATE statement in SQL: you give values for some of the fields, and a where clause determines which rows the update applies to.
With a full update, you simply pass an object of the type that is mapped to the table. It means: update the row with the same primary key as the object, setting all fields (hence "full" update) to the values they have in the object. You call the update method on the table, passing the object (you can also pass an Iterable of objects to update them all).
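A minimal sketch of both styles, assuming a made-up Book/AppDb schema and an already-configured Squeryl SessionFactory (none of these names come from the question):

import org.squeryl.PrimitiveTypeMode._
import org.squeryl.{KeyedEntity, Schema}

// Hypothetical mapped class: "id" is the primary key Squeryl matches on.
class Book(val id: Long, var title: String, var price: Double) extends KeyedEntity[Long]

object AppDb extends Schema {
  val books = table[Book]("books")
}

object UpdateExamples extends App {
  transaction { // assumes SessionFactory.concreteFactory was set up beforehand
    // Full update: pass the whole object; the row whose primary key equals
    // b.id gets every column set from b's current field values.
    val b = new Book(1L, "Programming in Scala", 39.99)
    AppDb.books.update(b)

    // Full update of several objects at once, via an Iterable.
    AppDb.books.update(Seq(b, new Book(2L, "Scala for the Impatient", 29.99)))

    // Partial update: only the listed columns change, for the rows matched
    // by the where clause.
    update(AppDb.books)(bk =>
      where(bk.id === 1L)
      set(bk.price := 34.99)
    )
  }
}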

Related

Loopback: conditional value on upsert

So I am having this mass import of data into my DB, and depending on whether a record exists or not I am inserting or updating (upsert)...
Now, what I need is, when it is inserting (a new record), to populate both the 'created' and 'modified' properties with the current datetime... but only the 'modified' one when it is getting updated...
So if I were to use the 'before save' operation hook... how do I know whether it is for an inserted record or for an updated record?
Any ideas?
I solved it in a different manner (it would still be good to know the original answer).
In my case I upserted them all normally with 'modified' set to the current date; afterwards, I find all records with created: null and update just those with the 'created' value.

Guava HashMultimap update equal value

I have a HashMultimap. The current behavior of this data structure is that whenever I try to insert a value that already exists, it does not insert it. However, I would like a different behavior: if the object I want to insert as a value is equal to an existing one, I want to update (replace) that value.
This behaviour is the result of HashMultimap using a standard HashSet for its value collection. The contract of Set#add is:
If this set already contains the element, the call leaves the set unchanged
However, the contract of multimap does not require this. If you create a Collection implementation that has the update behaviour you desire, you can use Multimaps#newMultimap to create a multimap using that backing collection type.
I would caution though that this requirement seems suspect...the fact that you're trying to update the value objects while they are being used in a Set is somewhat smelly. It could be that what you really want is something like Map<CompositeKey<CurrentKey, CurrentValue>, State>. Then the update behavior simply becomes a put.
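For illustration, here is a minimal sketch of the Multimaps#newMultimap approach in Scala; ReplacingHashSet and Item are made-up names, and the value type's equality is deliberately defined on one field only so that an "equal" object can still carry new data:

import java.util
import com.google.common.base.Supplier
import com.google.common.collect.{Multimap, Multimaps}

// Made-up value type: two Items are equal when their names match.
class Item(val name: String, val quantity: Int) {
  override def equals(o: Any): Boolean = o match {
    case other: Item => other.name == name
    case _           => false
  }
  override def hashCode: Int = name.hashCode
  override def toString: String = s"$name x$quantity"
}

// A value collection whose add() replaces an equal element instead of
// leaving the set unchanged, which is what Set#add would do.
class ReplacingHashSet[V] extends util.HashSet[V] {
  override def add(v: V): Boolean = {
    super.remove(v) // drop the existing equal element, if any
    super.add(v)    // then store the new instance
  }
}

object ReplacingMultimapDemo extends App {
  val multimap: Multimap[String, Item] = Multimaps.newMultimap[String, Item](
    new util.HashMap[String, util.Collection[Item]](),
    new Supplier[util.Collection[Item]] {
      override def get(): util.Collection[Item] = new ReplacingHashSet[Item]
    }
  )

  // The second put replaces the equal Item instead of being silently ignored.
  multimap.put("fruit", new Item("apple", 1))
  multimap.put("fruit", new Item("apple", 5))
  println(multimap) // {fruit=[apple x5]}
}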

TSQL - force update of persisted computed column

I have a persisted computed column in one table with the value calculated using a user function. How can I force that column to be updated without updating any other column in that table?
UPDATE: So as it turns out, this will not work as I imagined it.
I wanted to have a user function that contains a sub-query, gets me some data and stores it in the computed column. But SQL Server won't allow this...
It looks like I will have to do something similar with insert/update triggers.
If you persist the value by adding the PERSISTED keyword, the value is both stored on insert and kept synchronized when the referenced columns are updated.

How do I ignore the created column on a Zend_DB table save?

How would I keep Zend_DB's save() from trying to fill out a created column? I do not need that column for a certain model.
Don't send the data. save() is part of the Zend_Db_Table_Row API and is designed to be somewhat intelligent in the way it saves data to a row. It will perform an insert or an update of a row depending on what is required.
save() will also only update the columns that it has data for. If you don't send new data for your created column save() won't overwrite the data.
Whenever possible I let the database create and update the created and updated columns. That way I have the information available to query if I need it, but I don't have to do something in PHP that my database can do better.
Check out http://framework.zend.com/manual/1.12/en/zend.db.table.html Section "Advanced usage".
For more specific and optimized requests, you may wish to limit the number of columns returned in a row or rowset. This can be achieved by passing a FROM clause to the select object. The first argument in the FROM clause is identical to that of a Zend_Db_Select object with the addition of being able to pass an instance of Zend_Db_Table_Abstract and have it automatically determine the table name.
Important
The rowset contains rows that are still 'valid' - they simply contain a subset of the columns of a table. If a save() method is called on a partial row then only the fields available will be modified.
So, if you called update() I think it would be as simple as unsetting the value for the column you don't want to touch. Of course, database constraints will need to be honored - i.e. the column should allow nulls.

Dynamic auditing of data with PostgreSQL trigger

I'm interested in using the following audit mechanism in an existing PostgreSQL database.
http://wiki.postgresql.org/wiki/Audit_trigger
but, would like (if possible) to make one modification. I would also like to log the primary key's value so that it can be queried later. So, I would like to add a field named something like "record_id" to the "logged_actions" table. The problem is that every table in the existing database has a different primary key field name. The good news is that the database has a very consistent naming convention: it's always the table name followed by "_id". So, if a table is named "employee", the primary key is "employee_id".
Is there any way to do this? Basically, I need something like OLD.FieldByName(x) or OLD[x] to get the value out of the id field and put it into the record_id field in the new audit record.
I do understand that I could just create a separate, custom trigger for each table that I want to keep track of, but it would be nice to have it be generic.
Edit: I also understand that the key value does get logged in the old/new data fields. But what I would like is to make querying the history easier and more efficient. In other words:
select * from audit.logged_actions where table_name = 'xxxx' and record_id = 12345;
Another edit: I'm using PostgreSQL 9.1.
Thanks!
You didn't mention your version of PostgreSQL, which is very important when writing answers to questions like this.
If you're running PostgreSQL 9.0 or newer (or able to upgrade) you can use this approach as documented by Pavel:
http://okbob.blogspot.com/2009/10/dynamic-access-to-record-fields-in.html
In general, what you want is to reference a dynamically named field in a record-typed PL/PgSQL variable like 'NEW' or 'OLD'. This has historically been annoyingly hard; it is still awkward, but at least possible as of 9.0.
Your other alternative - which may be simpler - is to write your audit triggers in plperlu, where dynamic field references are trivial.