How to get column name for a unique_constraint violation? - postgresql

I am using the pq driver and I'm wondering why the pq.Error gives an empty Column when I face a unique constraint violation.
I could parse Detail, but is there any reason why Column would be empty? It would be preferable to get email from Column instead of parsing Detail.
Here is what the error looks like:
Severity:"ERROR"
Code:"23505"
Message:"duplicate key value violates unique constraint "unique_users""
Detail:"Key (email)=(user3#email.com) already exists."
Hint:""
Position:""
InternalPosition:""
InternalQuery:""
Where:""
Schema:"public"
Table:"users"
Column:""
DataTypeName:""
Constraint:"unique_users"
File:"nbtinsert.c"
Line:"534"
Routine:"_bt_check_unique"
Unfortunately, the Column value is empty. I'm trying to come up with error messaging for my application, and I'm wondering if there is some way to get that information so I can enhance the message explaining why the entity was not created and tell the caller which field (email in this case) caused it.

The specific error details (including the violated constraint and the offending column/value) are in the Detail field.
EDIT:
I suspect Column is empty because a unique constraint can cover multiple columns. Constraint gives the constraint name; combined with Schema and Table, you can look up the constraint in pg_constraint. Its conkey field holds references to the covered attributes (columns).
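For what it's worth, the Detail string of a 23505 error has a stable shape (Key (col1, ...)=(val1, ...) already exists.), so parsing it is reasonably safe. A minimal sketch, shown in Python for brevity (the question uses Go's pq, but the Detail format itself comes from Postgres and is driver-independent):

```python
import re

# Postgres formats the Detail of a 23505 (unique_violation) error as:
#   Key (col1, col2, ...)=(val1, val2, ...) already exists.
DETAIL_RE = re.compile(r"Key \((?P<cols>.+?)\)=\((?P<vals>.+?)\) already exists\.")

def parse_unique_detail(detail):
    """Return a {column: value} dict parsed from a Detail string, or None."""
    m = DETAIL_RE.match(detail)
    if m is None:
        return None
    cols = [c.strip() for c in m.group("cols").split(",")]
    vals = [v.strip() for v in m.group("vals").split(",")]
    return dict(zip(cols, vals))

print(parse_unique_detail("Key (email)=(user3#email.com) already exists."))
# {'email': 'user3#email.com'}
```

Note this split is naive if a value itself contains ", "; for guaranteed-simple values (emails, codes) it works fine.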

Related

Copy data from Cosmos Db to table storage fails on custom RowKey

I'm trying to get a very simple data migration to work, where I want 3 fields from Cosmos Db documents to be inserted as entities in Table Storage.
The challenge seems to be that I want an Id from the document to also be the value of the partition key and row key.
I took the Copy data activity, defined Cosmos Db as source, table storage as sink and defined mappings to get the right data into the right field.
In the sink you can specify what to do with partition key and row key.
When I specify the partition key to be the id from the document, it works.
However, when I do the same for row key (instead of a generated identifier), I get this error "The specified AzureTableRowKeyName 'UserId' doesn't exist in the source data".
The weird thing is that there appears to be no problem regarding the partition key for that value.
Anyone who can point me in the right direction?
Thanks to @BhanunagasaiVamsi-MT for pointing me in the right direction.
For completeness' sake, I'm dropping my solution here, although the link in that post also explains it.
You need to:
specify additional columns, based on the source data
select these columns as rowkey or partitionkey in the sink
assign the additional fields to rowkey and partitionkey in the mapping (feels like a duplicate thing to do, but if you don't, you get the error mentioned in the question)
I tried to reproduce the same in my environment and got the same error as below:
If I specify the unique identifier, it works fine.
Note: Specify the name of the column whose values are used as the row key. If not specified, a GUID is used for each row.
For more information, refer to this Microsoft documentation.

Custom Work Item Type: Adding Unique ID Constraint

I created a custom work item type (WIT) and added an integer field for use as a unique identifier. It is called ID and is a required field.
I would like the following constraint:
When a user creates a new work item of this type and inserts a value for ID, a check is run to verify that no work item of this type already has the same ID. If one does, the user should be prevented from creating the work item.
The point is to avoid having multiple work items of this type with duplicate unique IDs. I looked into the "Rules" section to see if I could add a constraint that checks for pre-existing integers of the same value and prevents the user from creating the WIT if the value already exists in the system. However, I was not able to find a way to do so. I also tried making the field of type Identity, but that forces you to use a person (not a number) as the identifier.
Your goals are not clear from your question. You already have an ID (System.Id as its system reference) for each work item type; you do not need to create something new. Rules in work item types do not support complex logic (see Sample custom rule scenarios).
As a workaround (if you need the second id for your work item type), you can:
Set a default value of 0 for your field.
Create a custom app to:
Find 0 ids: Query By Wiql.
Update them to the calculated value: Work Items - Update.
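The "find 0 ids" step could use a WIQL query posted to the Query By Wiql endpoint. A sketch of the request body, where MyCustomWIT and Custom.SecondId are hypothetical names you would replace with your own work item type and field reference name:

```python
import json

# Hypothetical names: replace MyCustomWIT and Custom.SecondId with your
# work item type and the reference name of your integer field.
wiql = {
    "query": (
        "SELECT [System.Id] FROM WorkItems "
        "WHERE [System.WorkItemType] = 'MyCustomWIT' "
        "AND [Custom.SecondId] = 0"
    )
}

# POST this body (Content-Type: application/json) to:
#   https://dev.azure.com/{org}/{project}/_apis/wit/wiql?api-version=6.0
body = json.dumps(wiql)
print(body)
```

The response lists matching work item ids, which your custom app can then update one by one.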

Postgres "reverse" foreign key constraint

I am not sure the best way to describe this, so please bear with me.
I have a database related to plant and animals species that occur in Oklahoma. One table therein, acctax (accepted taxonomy), contains the accepted name of a given taxon. Another table, syntax (synonym taxonomy), contains the synonyms of a taxon (one to many relationship).
We have devised a code (called acode), based on a taxon's scientific binomial, to uniquely identify each. For instance, the American bison (Bison bison) has the unique code M-BIBI. In the acctax table, this is the primary key. If another taxon had a similar binomial (for instance, a mammal whose generic and specific names both started with Bi), we would merely add a sequential number after it, e.g. M-BIBI2 (fortunately, in this instance, this is only hypothetical; no such case exists).
The syntax table relates to the acctax table via this code. However, we also assign each synonym a unique identifier using the same convention (called the scode). This serves as the primary key in the syntax table.
Where it becomes ridiculously tedious is that the taxonomy occasionally changes. An accepted name can become a synonym and a synonym can become an accepted name. As a result, I absolutely need to ensure that no newly assigned "acode" is already in use as an "scode" in the syntax table, and that no newly assigned "scode" is already in use as an "acode" in the acctax table.
To partially solve this, I have created a view that unions the acode and scode from the acctax and syntax tables, respectively. Prior to adding new records, I can query the view to see if a newly created code is already in use. However, I would prefer a constraint that only allows the insert of a new acode or scode in the acctax and syntax tables, respectively, if that code does not already exist in the view I created.
Hopefully, this makes sense. I realize it is long and convoluted.
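Since a CHECK constraint cannot reference another table, the usual approach for the situation described above is a pair of BEFORE INSERT triggers, one per table, each rejecting a code that already exists on the other side. A runnable sketch using SQLite (Postgres would use PL/pgSQL trigger functions instead, but the idea is the same; column names follow the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE acctax (acode TEXT PRIMARY KEY);
CREATE TABLE syntax (scode TEXT PRIMARY KEY,
                     acode TEXT REFERENCES acctax(acode));

-- Reject an acode already in use as an scode, and vice versa.
CREATE TRIGGER acode_not_scode BEFORE INSERT ON acctax
WHEN EXISTS (SELECT 1 FROM syntax WHERE scode = NEW.acode)
BEGIN SELECT RAISE(ABORT, 'code already in use as scode'); END;

CREATE TRIGGER scode_not_acode BEFORE INSERT ON syntax
WHEN EXISTS (SELECT 1 FROM acctax WHERE acode = NEW.scode)
BEGIN SELECT RAISE(ABORT, 'code already in use as acode'); END;
""")

conn.execute("INSERT INTO acctax VALUES ('M-BIBI')")
conn.execute("INSERT INTO syntax VALUES ('M-BIBI2', 'M-BIBI')")  # ok
try:
    conn.execute("INSERT INTO acctax VALUES ('M-BIBI2')")  # already an scode
except sqlite3.IntegrityError as e:
    print(e)  # code already in use as scode
```

Unlike querying the union view first, the triggers enforce the rule inside the database, so concurrent or forgotten checks cannot slip a duplicate code through.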

Avoid duplicate inserts without unique constraint in target table?

Source & target tables are similar.
The target table has a UUID field that is computed in tMap; however, the flow should not insert duplicate persons in the target, i.e. unique (firstname, lastname, dob, gender). I tried marking those columns as keys in tMap, as in the screenshot below, but that does not prevent duplicate inserts. How can I avoid duplicate inserts without adding a unique constraint on the target?
I also tried "using field" in target.
Edit: Solution as suggested below:
The CDC components in the Paid version of Talend Studio for Data Integration undoubtedly address this.
In Open Studio, you can roll your own change data capture based on the composite unique key (firstname, lastname, dob, gender).
Use tUniqRow on the data coming from stage_geno_patients, unique on the following columns: firstname, lastname, dob, gender
Feed that into a tMap
Add another query as input to the tMap to perform look-ups against the table behind "patients_test" and find a match on firstname, lastname, dob, gender. That lookup should "Reload for each row", looking up against values from the staging row
In the no-match case, detect it and insert the staging row of data into the table behind "patients_test"
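Outside Talend, the lookup-then-insert-on-no-match steps above boil down to an insert guarded by a NOT EXISTS check on the composite key. A runnable sketch using SQLite as a stand-in (table and column names assumed from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE patients_test (
    uuid TEXT, firstname TEXT, lastname TEXT, dob TEXT, gender TEXT)""")

def insert_if_new(row):
    # Insert only when no existing row matches the composite natural key.
    conn.execute("""
        INSERT INTO patients_test (uuid, firstname, lastname, dob, gender)
        SELECT ?, ?, ?, ?, ?
        WHERE NOT EXISTS (
            SELECT 1 FROM patients_test
            WHERE firstname = ? AND lastname = ? AND dob = ? AND gender = ?)""",
        row + row[1:])

row = ("u-1", "Ada", "Lovelace", "1815-12-10", "F")
insert_if_new(row)
insert_if_new(("u-2",) + row[1:])   # same person, different uuid: skipped
print(conn.execute("SELECT COUNT(*) FROM patients_test").fetchone()[0])  # 1
```

This is what the tMap lookup-plus-filter achieves, except the guard runs inside the database rather than in the job.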
Q: Are you going to update information, also? Or, is the goal only to perform unique inserts where the data is not already present?

Insert record in table if does not exist in iPhone app

I am obtaining a JSON array from a URL and inserting the data into a table. Since the contents of the URL are subject to change, I want to make a second connection to the URL, check for updates, and insert new records into my table using sqlite3.
The issues that I face are:
1) My table doesn't have a primary key
2) The URL lists the changes on the same day. Hence, if I run my app multiple times, I get duplicate entries when I insert values into my database. I want a check so that duplicated entries for the same day are removed. The problem could be solved by adding a constraint, but since the URL itself has duplicated values, I find that difficult.
If you have no primary key or anything unique to each record, the only way I can see is, when your new data comes in, to go through the new entries and for each one check whether the exact same data already exists in the database. If it doesn't, add it; if it does, skip over it.
You could even create a unique key yourself for each entry as a concatenation of each column of the table. That way you can quickly check whether the entry already exists in the database.
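A sketch of that concatenated-key idea; the separator is an assumption (pick one that cannot occur in your data, or different rows can concatenate to the same string):

```python
def row_key(row, sep="\x1f"):
    """Synthetic unique key: all columns joined with a separator that
    should never occur in the data (otherwise ('ab','c') and ('a','bc')
    would collide)."""
    return sep.join(str(c) for c in row)

existing = [("Bob", "Smith", "1990-01-01")]
seen = {row_key(r) for r in existing}

incoming = [("Bob", "Smith", "1990-01-01"),   # duplicate: skipped
            ("Ann", "Jones", "1985-06-15")]   # new: kept for insert
to_insert = [r for r in incoming if row_key(r) not in seen]
print(to_insert)  # [('Ann', 'Jones', '1985-06-15')]
```

The same key could also be stored in a UNIQUE column so the database does the check for you.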
I see two possibilities depending on your setup:
You have a column set up as UNIQUE (whether through a PRIMARY KEY or not). In this case, you can use the ON CONFLICT clause:
http://www.sqlite.org/lang_conflict.html
If you find this construct a little confusing, you can instead use "INSERT OR REPLACE" or "INSERT OR IGNORE" as described here:
http://www.sqlite.org/lang_insert.html
You do not have a column set up as UNIQUE. In this case, you will need to SELECT first to check for duplicate data, and based on the result INSERT, UPDATE, or do nothing.
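A runnable sketch of the first case, using a composite UNIQUE constraint so INSERT OR IGNORE silently skips duplicates (table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The composite UNIQUE constraint gives the ON CONFLICT machinery
# something to detect duplicates against.
conn.execute("""CREATE TABLE items (
    title TEXT, day TEXT, UNIQUE (title, day))""")

rows = [("Launch", "2024-01-01"),
        ("Launch", "2024-01-01"),   # duplicate: ignored
        ("Patch",  "2024-01-02")]
conn.executemany("INSERT OR IGNORE INTO items VALUES (?, ?)", rows)
print(conn.execute("SELECT COUNT(*) FROM items").fetchone()[0])  # 2
```

INSERT OR REPLACE works the same way, except the conflicting row is overwritten instead of skipped.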
A more common and robust way to handle this is to associate a timestamp with each data item on the server. When your app queries the server, it provides the timestamp from its last sync. The server then queries its database and returns all values timestamped later than the timestamp provided by the app, along with a new timestamp value for the app to store and use on the next sync.
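A sketch of the server-side half of that timestamp scheme (schema and names are hypothetical; ISO-8601 timestamp strings compare correctly as text):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, data TEXT, updated_at TEXT)")
conn.executemany("INSERT INTO items (data, updated_at) VALUES (?, ?)",
                 [("old", "2024-01-01T00:00:00"),
                  ("new", "2024-03-01T12:00:00")])

def changes_since(last_sync):
    # Return rows changed after the client's last sync, plus the new
    # cursor the client should store for its next sync.
    rows = conn.execute(
        "SELECT id, data, updated_at FROM items WHERE updated_at > ?",
        (last_sync,)).fetchall()
    new_cursor = max((r[2] for r in rows), default=last_sync)
    return rows, new_cursor

rows, cursor = changes_since("2024-02-01T00:00:00")
print(rows)    # only the row updated after the client's last sync
print(cursor)  # 2024-03-01T12:00:00
```

The app then inserts or updates only the returned rows, so duplicates never arise on its side.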