I am using SQL Server 2012 and I want to create a "changes" table - it will be populated with data from another table whenever that table's column values are changed.
I am adding "datetime2" and "rowversion" columns to the "changes" table in order to track when the changes are made.
Is it ok to use "rowversion" as primary key?
I have read here that its value changes whenever the row is updated, which is why it is not a good candidate for a primary key: it would make foreign keys that reference it invalid.
Anyway, if it won't be referenced by a foreign key and the rows of the "changes" table will never be updated (only new rows will be inserted), is it ok to use "rowversion" as the PK, or should I use an additional column?
Some good info here:
Careful reading of the MSDN page also shows that duplicate rowversion values are possible if SELECT INTO statements are used improperly. Something to watch out for there.
I would stick with an Identity field in the original data, carried over into the change tracking table that has its own Identity field.
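To make that concrete, here is a rough sketch (table and column names are invented purely for illustration) of a source table with a stable identity key plus a rowversion, and a change-tracking table with its own identity key that merely copies the rowversion value:

-- Sketch only; names are illustrative, not from the question.
create table SourceData (
  id int identity primary key,       -- stable key, safe to reference
  payload nvarchar(100),
  rv rowversion                      -- changes on every update, so a poor PK
)

create table Changes (
  changeId int identity primary key, -- the change table's own key
  sourceId int not null references SourceData (id),
  payload nvarchar(100),
  changedAt datetime2 not null default sysdatetime(),
  sourceRowVersion binary(8)         -- snapshot of SourceData.rv at change time
)

Note that a copied rowversion value has to be stored as binary(8), since only one column per table can be of the rowversion type itself.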
I am trying to populate some tables using data that I extracted from Google BigQuery. For that purpose I essentially normalized a flattened table into multiple tables, carrying the primary key of each row along into those tables. The important point is that I need to load those primary keys in order to satisfy foreign key references.
Having inserted this data into tables, I then try to add new rows to these tables. I don't specify the primary key, presuming that Postgres will auto-generate those key values.
However, I always get a 'duplicate key value violates unique constraint "xxx_pkey" ' type error, e.g.
"..duplicate key value violates unique constraint "collection_pkey" DETAIL: Key (id)=(1) already exists.
It seems this is triggered by including the primary key in the data when initializing the table. That is, explicitly setting primary keys somehow seems to disable or reset the expected auto-generation of the primary key, i.e. I was expecting that new rows would be assigned primary keys starting from the highest value already in the table.
Interestingly I get the same error whether I try to add a row via SQLAlchemy or from the psql console.
So, is this as expected? And if so, is there some way to get the system to again auto-generate keys? There must be some hidden psql state that controls this...the schema is unchanged by directly inserting keys, but psql behavior is changed by that action.
I am happy to provide additional information.
Thanks
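For what it's worth, this is the classic sequence-out-of-sync situation: a serial/identity column takes its default from a sequence, and rows inserted with explicit key values never advance that sequence, so it keeps handing out values that are already taken. A sketch of realigning it, using the collection table from the error message above and assuming its key column is a serial/identity column named id:

-- Realign the sequence backing collection.id with the data already loaded.
select setval(
  pg_get_serial_sequence('collection', 'id'),  -- the sequence behind collection.id
  coalesce(max(id), 1)                         -- jump it past the highest existing key
)
from collection;

After that, inserts that omit the primary key should start receiving values above the ones loaded explicitly.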
In table A I've got a composite primary key made of 3 columns. I want to have only one of these three columns as a foreign key in table B, just to make sure that the value I insert into table B's column exists in table A.
Currently, from what I've read, it looks like in Entity Framework I have to add all three columns of the composite PK, which is not really what I need. The latest answer that I've found was from 2015; maybe something has changed since then?
I know that I can add a manual check on each insert/update call, but I don't want to do that; maybe there is a more elegant way.
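For context, the underlying database has the same restriction: a foreign key has to reference a primary key or unique constraint, so a single column out of a composite key can only be targeted if that column also carries its own unique constraint. A sketch with invented names:

-- Illustrative names only.
create table TableA (
  ColA int not null,
  ColB int not null,
  ColC int not null,
  constraint PK_TableA primary key (ColA, ColB, ColC)
)

create table TableB (
  id int identity primary key,
  ColA int not null
  -- foreign key (ColA) references TableA (ColA) would be rejected here,
  -- because ColA alone is not declared unique in TableA
)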
I have a parent table (Assessment) with two child tables with 1-to-1 relationships defined. To make sure that a child row is never added that does not have a parent entry, I want to add an insert trigger to the child table (ConsequenceAssessment in this case). The following ConsequenceAssessment Before Change trigger fires, but I cannot find how to reference the INSERTED rowset. There is an OLD recordset that works for an update, but how do I access the inserted row? The following is my best attempt, but the ConsequenceAssessment table does not yet include the new row, and therefore the trigger always hits the RaiseError.
UPDATE: Just found out that I can enforce referential integrity on a one-to-one relationship within Access (rookie misunderstanding). I would still like to know how to access the updated recordset. With MS SQL Server, this is implemented via the INSERTED table, which is available within the scope of an INSERT trigger. So, what is the equivalent in MS Access?
In a Before Change data macro, [fieldname] refers to the new value and [old].[fieldname] refers to the old value (which would be Null for an insert).
In your particular case [ConsequenceAssessment].[id] appears to be the primary key for that table, not a foreign key referring to the [Assessment] (parent) table. So, the lookup is simply searching for the wrong key value in the parent table.
I am trying to use Entity Framework Database First to do quick prototyping of a reporting website for a huge db. The problem is that one of the tables doesn't have a key. I get an 'Error 159: EntityType has no key defined'. If I add a key in the model designer, I get 'Error 3024: Must specify mapping for all key properties'. My question is whether there is a way to work around this WITHOUT adding a key to the table. The table is not in our control.
A huge table which does not have a key? It would not be possible for you or for the table owner to search for anything in this table without a full table scan. Also, it is basically impossible to UPDATE a single row without having a primary key.
You really have to either create a synthetic key or ask the owner to do that. As a workaround, you might be able to find some existing column (or 2-3 columns) which is unique enough that it can be used as a unique key. If it is unique but does not have an actual index created, that would still be bad for performance - you should create such an index.
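As a rough sketch of that workaround (table and column names here are purely hypothetical): if some combination of existing columns really is unique, a unique index both documents that fact and gives Entity Framework something it can treat as the entity key.

-- Hypothetical example: (DeviceId, ReadingTime) happens to be unique in practice.
create unique index UX_Readings_DeviceId_ReadingTime
  on Readings (DeviceId, ReadingTime)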
I have a "services" table for detailing services that we provide. Among the data that needs recording are several small one-to-many relationships (all with a foreign key constraint to the service_id) such as:
service_owners -- user_ids responsible for delivery of service
service_tags -- e.g. IT, Records Management, Finance
customer_categories -- ENUM value
provider_categories -- ENUM value
software_used -- self-explanatory
The problem I have is that I want to keep a history of updates to a service, for which I'm using an update trigger on the table, that performs an insert into a history table matching the original columns. However, if a normalized approach to the above data is used, with separate tables and foreign keys for each one-to-many relationship, any update on these tables will not be recognised in the history of the service.
Does anyone have any suggestions? It seems like I need to store child keys in the service table to maintain the integrity of the service history. Is a delimited text field a valid approach here, or, as I am using PostgreSQL, perhaps arrays are also a valid option? These feel somewhat dirty, though!
Thanks.
If your table is:
create table T (
ix int identity primary key,
val nvarchar(50)
)
And your history table is:
create table THistory (
historyId int identity primary key, -- the history table's own key
ix int, -- key of the row in T that this history entry describes
val nvarchar(50),
updateType char(1), -- C=Create, U=Update or D=Delete
updateTime datetime,
updateUsername sysname
)
Then you just need to put an update trigger on all tables of interest. You can then find out what the state of any/all of the tables was at any point in history, to determine what the relationships were at that time.
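For example, a minimal sketch of such an update trigger for T, assuming the THistory layout above:

create trigger trg_T_Update on T
after update
as
begin
  set nocount on;
  -- snapshot the new state of every updated row into the history table
  insert into THistory (ix, val, updateType, updateTime, updateUsername)
  select i.ix, i.val, 'U', getdate(), suser_sname()
  from inserted i;
end

Analogous insert and delete triggers (using the inserted and deleted pseudo-tables with 'C' and 'D') complete the picture.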
I'd avoid using arrays in any database whenever possible.
I don't like updates for the exact reason you are describing here... you lose information as it's overwritten. My answer is quite simple... don't update. Not sure if you're at a point where this can be implemented, but if you can, I'd recommend using the main table itself to store history (no need for a second set of history tables).
Add a column to your main header table called 'active'. This can be a character or a bit (0 is off and 1 is on). Then it's a bit of trigger magic... when an update is performed, you insert a row into the table identical to the record being overwritten, with a status of '0' (or inactive), and then update the existing row (this process keeps the ID column on the active record the same; the newly inserted record is the inactive one with a new ID).
This way no data is ever lost (admittedly you are storing quite a few rows...) and the history can easily be viewed with a select where active = 0.
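Roughly, the trigger could look like this (a sketch in PostgreSQL, since that's what you're on; the simplified services table and its columns are only illustrative):

-- Simplified, illustrative table; real column list would mirror your services table.
create table services (
  id serial primary key,
  name text,
  active boolean not null default true
);

create or replace function services_keep_history() returns trigger as $$
begin
  -- re-insert the pre-update image of the row as an inactive copy;
  -- it gets a fresh id, while the live row keeps its original id
  insert into services (name, active)
  values (old.name, false);
  return new;
end;
$$ language plpgsql;

create trigger trg_services_keep_history
  after update on services
  for each row
  execute function services_keep_history();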
The pain here is if you are working on something already implemented... every existing query that hits this table will need to be updated to include a check for the active column. This makes the solution very easy to implement if you are designing a new system, but a pain if it's a long-standing application. Unfortunately, existing reports will include both off and on records (without throwing an error) until you can modify their where clauses.
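If rewriting every query up front isn't feasible, one option (again just a sketch, with an illustrative name) is a view over the active rows that existing queries and reports can gradually be repointed to:

-- Expose only the live rows; historical copies stay in the base table.
create view services_current as
  select *
  from services
  where active;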