Rename enum item in PostgreSQL

I would like to change the name of an item in an enum type in PostgreSQL 9.1.5.
Here is the type's create statement:
CREATE TYPE import_action AS ENUM
('Ignored',
'Inserted',
'Updated',
'Task created');
I just want to change 'Task created' to 'Aborted'. From the documentation, it seems like the following should work:
ALTER TYPE import_action
RENAME ATTRIBUTE "Task created" TO "Aborted";
However, I get an error message:
********** Error **********
ERROR: relation "import_action" does not exist
SQL state: 42P01
But, it clearly does exist.
The type is currently being used by more than one table.
I'm beginning to think that there must not be a way to do this. I've tried the dialog for the type in pgAdminIII, but there is no way that I can see to rename it there. (So, either a strong hint that I can't do it, or - I'm hoping - a small oversight by the developer that created that dialog.)
If I can't do this in one statement, then what do I need to do? Will I have to write a script to add the new item, update all of the records to the new value, then drop the old item? Will that even work?
It seems like this should be a simple thing. As I understand it, the records are just storing a reference to the type and item; I don't think they actually store the text value that I have given it. But maybe I'm wrong here as well.

In PostgreSQL version 10, the ability to rename the labels of an enum has been added as part of the ALTER TYPE syntax:
ALTER TYPE name RENAME VALUE 'existing_enum_value' TO 'new_enum_value'
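Applied to the type from the question, the statement would presumably look like this (PostgreSQL 10 or later):
ALTER TYPE import_action
RENAME VALUE 'Task created' TO 'Aborted';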

Update: For PostgreSQL version 10 or later, see the top-voted answer.
Names of enum values are called labels; attributes are something different entirely.
Unfortunately changing enum labels is not simple, you have to muck with the system catalog:
http://www.postgresql.org/docs/9.1/static/catalog-pg-enum.html
UPDATE pg_enum SET enumlabel = 'Aborted'
WHERE enumlabel = 'Task created' AND enumtypid = (
SELECT oid FROM pg_type WHERE typname = 'import_action'
)

The query in the accepted answer doesn't take into account schema names. Here's a safer (and simpler) one, based on http://tech.valgog.com/2010/08/alter-enum-in-postgresql.html
UPDATE pg_catalog.pg_enum
SET enumlabel = 'NEW_LABEL'
WHERE enumtypid = 'SCHEMA_NAME.ENUM_NAME'::regtype::oid AND enumlabel = 'OLD_LABEL'
RETURNING enumlabel;
Note that this requires the "rolcatupdate" (Update catalog directly) permission - even being a superuser is not enough.
It seems that updating the catalog directly is still the only way as of PostgreSQL 9.3.

There's a difference between types, attributes, and values. You can create an enum like this.
CREATE TYPE import_action AS ENUM
('Ignored',
'Inserted',
'Updated',
'Task created');
Having done that, you can add values to the enum.
ALTER TYPE import_action
ADD VALUE 'Aborted';
But the syntax diagram doesn't show any support for dropping or renaming a value. The syntax you were looking at was the syntax for renaming an attribute, not a value.
Although this design is perhaps surprising, it's also deliberate. From the pgsql-hackers mailing list:
If you need to modify the values used or want to know what the integer
is, use a lookup table instead. Enums are the wrong abstraction for
you.
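For completeness, here is a minimal sketch of the lookup-table approach that quote recommends; the table and column names are hypothetical, not part of the original schema:
CREATE TABLE import_action_lookup (
    id integer PRIMARY KEY,
    label text NOT NULL UNIQUE
);
-- referencing tables store the integer id, so renaming is a plain UPDATE:
UPDATE import_action_lookup
SET label = 'Aborted'
WHERE label = 'Task created';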

Related

Redshift Spectrum table doesn't recognize array

I ran a crawler on a JSON S3 file to update an existing external table.
Once it finished, I checked SVL_S3LOG to see the structure of the external table and saw that it was updated and I have a new column with the Array<int> type, as expected.
When I tried to execute select * on the external table, I got this error: "Invalid operation: Nested tables do not support '*' in the SELECT clause.;"
So I tried to spell out the select statement with all column names:
select name, date, books.... (books is the Array<int> type)
from external_table_a1
and got this error:
Invalid operation: column "books" does not exist in external_table_a1;"
I have also checked the table external_table_a1 under "AWS Glue" and saw that the column "books" is recognized and has the type Array<int>.
Can someone explain why my simple query is wrong?
What am I missing?
Querying JSON data is a bit of a hassle with Redshift: when parsing is enabled (e.g. using the appropriate SerDe configuration) the JSON is stored as a SUPER type. In your case that's the Array<int>.
The AWS documentation on Querying semistructured data seems pretty straightforward, mentioning that PartiQL uses "dotted notation and array subscript for path navigation when accessing nested data". This doesn't work for me, although I can't find any reason for that in their SUPER Limitations documentation.
Solution 1
What I have to do is set the flags set json_serialization_enable to true; and set json_serialization_parse_nested_strings to true;, which will parse the SUPER type as JSON (i.e. back to JSON). I can then use JSON functions to query the data. Unnesting data gets even crazier because you can only use the unnest syntax select item from table as t, t.items as item on SUPER types. I genuinely don't think that this is the supposed way to query and unnest SUPER objects, but that's the only approach that worked for me.
They described that in some older "Amazon Redshift Developer Guide".
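Putting Solution 1 together, a rough sketch using the flags above and the table and column names from the question (the exact query shape may vary):
SET json_serialization_enable TO true;
SET json_serialization_parse_nested_strings TO true;
-- unnest the SUPER array with the FROM-clause syntax described above:
SELECT t.name, book
FROM external_table_a1 AS t, t.books AS book;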
Solution 2
When you are writing or creating a query, Redshift will try to fit the output into one of the basic column data types. If the result of your query does not match any of those types, Redshift will not process the query. Hence, in order to convert a SUPER to a compatible type, you will have to unnest it (using the rather peculiar Redshift unnest syntax).
For me, this works in certain cases, but I'm not always able to properly index arrays, nor can I access the array index (using the my_table.array_column as array_entry at array_index syntax).

What does the dot separator mean in Postgres configuration parameters?

The question is pretty much summed up in the title:
What does the dot stand for when naming a configuration parameter in postgres?
For example:
SET bar TO true;
results in ERROR: unrecognized configuration parameter "bar", but
SET foo.bar TO true;
results in Query returned successfully ...
Why is that? FYI, I am using PostgreSQL 9.4
I haven't been able to find a clear answer to this in the documentation.
As per http://www.postgresql.org/docs/9.4/static/sql-syntax-lexical.html :
The period (.) is used in numeric constants, and to separate schema,
table, and column names.
But that certainly doesn't seem to be the case here, as there is no schema, table, or column with the name foo. search_path is set to the default (public).
Found the answer:
http://www.postgresql.org/docs/9.4/static/runtime-config-custom.html
Custom options have two-part names: an extension name, then a dot,
then the parameter name proper, much like qualified names in SQL. An
example is plpgsql.variable_conflict.
Because custom options may need to be set in processes that have not
loaded the relevant extension module, PostgreSQL will accept a setting
for any two-part parameter name.
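A quick illustration, using the example parameter from the docs alongside the one from the question:
SET plpgsql.variable_conflict TO 'error';  -- defined by the plpgsql extension
SET foo.bar TO true;                       -- accepted even though no extension "foo" is loaded
SHOW foo.bar;                              -- returns 'true'
SELECT current_setting('foo.bar');         -- same value, read back as text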

PostgreSQL 9.1 enum type ordering doesn't work like I expect

I've got an enum type that I've become interested in ordering in a particular way. I've written and run the SQL to impose a new ordering by sorting the labels (externally) by my new criteria and then updating the enumsortorder for all the values.
It doesn't work. I've verified that I've satisfied the (really weird) rule that the sort ordering feature works only on enum types with even oids; my oid for this type is even (58016). As far as I can tell, the ordering being imposed when I ORDER BY the enum column is exactly the same as what it was before.
Is there something else I need to do in order to make this work? The PostgreSQL documentation makes me think it should work.
Even OIDs have fixed ordering, so you can't reorder them by modifying the pg_enum system table.
You're going to have to replace the existing enum with a new enum type. This means:
Creating a new enum type.
Dropping any relationships that use the enum.
Updating the columns to the new type using something like:
ALTER TABLE foo ALTER COLUMN bar
TYPE new_enum_type
USING (bar::text)::new_enum_type;
Here the cast to text matches the new enum values to the old enum values by their name.
Finally you need to recreate all the dropped relationships.
If needed you can run all this DDL inside a transaction block.
Expect it to be slow if you have lots of data, as it's rewriting whole tables.
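A sketch of the whole replacement wrapped in a transaction; the type and table names here are hypothetical, and any dropped relationships still need to be recreated before committing:
BEGIN;
-- same labels as the old type, listed in the desired sort order
CREATE TYPE new_enum_type AS ENUM ('first', 'second', 'third');
ALTER TABLE foo ALTER COLUMN bar
TYPE new_enum_type
USING (bar::text)::new_enum_type;
DROP TYPE old_enum_type;  -- once no column uses it anymore
COMMIT;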

tSQLt FakeTable fails when user-defined data types are used

I got the following error when I tried to fake a table that uses user-defined data types.
COLLATE clause cannot be used on user-defined data types.
It looks like it's a known issue in tSQLt.
https://groups.google.com/forum/?fromgroups#!topic/tsqlt/AS-Eqy6BjlA
Besides altering the table definition, is there a workaround? Thanks.
Thanks to Chris Francisco, there is now a patch:
FYI I ran into the same issue and was able to get around it by digging
into the tSQLt code a bit. There's a function named
Private_GetFullTypeName
The last column returned by the function is named "Collation", which
returns an empty string '' if the variable @CollationName is null. If
you add "OR is_user_defined = 1" to the condition for the empty string,
I've been able to fake tables with user-defined data types.
I haven't run into any weird side effects from this alteration, but I'm
no expert on the inner workings of tSQLt (or on tSQLt in general), so
this may break functionality elsewhere.
In other words:
CASE WHEN @CollationName IS NULL OR is_user_defined = 1 THEN ''
ELSE ' COLLATE ' + @CollationName
END

duplicate primary key in return table created by select union

I have the following query, called searchit:
SELECT 2 AS sourceID, BLOG_COMMENTS.bID, BLOG_TOPICS.Topic_Title,
BLOG_TOPICS.LFD, BLOG_TOPICS.LC,
BLOG_COMMENTS.Comment_Narrative
FROM BLOG_COMMENTS INNER JOIN BLOG_TOPICS
ON BLOG_COMMENTS.bID = BLOG_TOPICS.bID
WHERE (BLOG_COMMENTS.Comment_Narrative LIKE @Phrase)
This query executes AND returns the correct results in the query builder!
HOWEVER, the query needs to run in code-behind, so I have the following line:
DataTable blogcomments = btad.searchit(aphrase);
There are no null fields in any row of any column in EITHER of the tables. The tables are small enough I can easily detect null data. Note that bID is key for blog_topics and cID is key for blog comments.
In any case, when I run this I get the following error:
Failed to enable constraints. One or more rows contain values
violating non-null, unique, or foreign-key constraints.
Tables have a 1 x N relationship, many comments for each blog entry. If I run the query with DISTINCT and remove Comment_Narrative from the return fields, it returns data correctly (but I need the other rows!). However, when I return the other rows, I get the above error!
I think this tells me that there is a constraint on the return table that I did not put there, so it must somehow be inheriting that constraint from the call to the query itself, because one of the tables happens to have a primary key defined (which it MUST have). But why does the query work fine in the query builder? The query builder does not care that bID is duped in the result (and it should not be), but the code-behind DOES care.
Addendum:
Just as tests,
I removed the bID from the return list and I still get the error.
I removed the primary key from blog_topics.bID and I get the same error.
This kinda tells me that it's not the fact that my bID is duped that is causing the problem.
Another test:
I went into the designer code (I know it's nasty, I'm just desperate).
I added the following:
// zzz
try
{
this.Adapter.Fill(dataTable);
}
catch ( global::System.Exception ex )
{
}
Oddly enough, when I run it, I get the same error as before AND it doesn't show the changes I've made in the error message:
Line 13909: }
Line 13910: BPLL_Dataset.BLOG_TOPICSDataTable dataTable = new BPLL_Dataset.BLOG_TOPICSDataTable();
Line 13911: this.Adapter.Fill(dataTable);
Line 13912: return dataTable;
Line 13913: }
I'm stumped.... Unless maybe it sees I'm not doing anything in the try catch and is optimizing for me.
Another addendum:
Suspecting that it was ignoring the test code I added to the designer, I added something to the catch. It produces the SAME error and acts like it does not see this code. (Well, okay, it DOES NOT see this code, because it prints out the same as before into the browser.)
// zzz
try
{
this.Adapter.Fill(dataTable);
}
catch ( global::System.Exception ex )
{
System.Web.HttpContext.Current.Response.Redirect("errorpage.aspx");
}
The thing is, when I made the original post, I was ALREADY trying to do a work-around. I'm not sure how far I can afford to go down the rabbit hole. Maybe I read the whole mess into C# and do all the joins and crap myself. I really hate to do that, because I've only recently gotten out of the habit, but I perceive I'm making a good faith effort to use the tool the way God and Microsoft intended. From wit's end, tff.
You don't really show how you're running this query from C# ... but I'm assuming it's either straight text in a SqlCommand or it's being done by some ORM ... Have you attempted writing this query as a stored procedure and calling it that way? The stored procedure would be easier to test and run by itself with sample data.
Given the fact that the error is mentioning null values I would presume that, if it is a problem with the query and not some other element of your code, then it'd have to be on one of the following fields:
BLOG_COMMENTS.bID
BLOG_TOPICS.bID
BLOG_COMMENTS.Comment_Narrative
If any of those fields are Nullable then you should be doing a COALESCE or an ISNULL on them before using them in any comparison or Join. It's situations like these which explain why most DBAs prefer to have as few nullable columns in tables as possible - they cause overhead and are prone to errors.
If that still doesn't fix your problem, then COALESCE/ISNULL all fields that are nullable and are being returned by this query. Take all null values out of the equation and just get the thing working and then, if you really need the null values to be null, go back through and remove the COALESCE/ISNULLs one at a time until you find the culprit.
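For example, the query from the question with ISNULL applied to the join and filter columns (the sentinel values here are arbitrary placeholders):
SELECT 2 AS sourceID, BLOG_COMMENTS.bID, BLOG_TOPICS.Topic_Title,
       BLOG_TOPICS.LFD, BLOG_TOPICS.LC,
       ISNULL(BLOG_COMMENTS.Comment_Narrative, '') AS Comment_Narrative
FROM BLOG_COMMENTS INNER JOIN BLOG_TOPICS
ON ISNULL(BLOG_COMMENTS.bID, -1) = ISNULL(BLOG_TOPICS.bID, -1)
WHERE ISNULL(BLOG_COMMENTS.Comment_Narrative, '') LIKE @Phrase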
My problem came from ignorance and a bit of dullness. I did not realize that just because a field is a key in the SQL table does not mean it has to be a key in the TableAdapter. If one has a key field defined in the SQL table and then creates a table adapter, the corresponding field in the adapter will also be a key. All I had to do was unset the key field in the TableAdapter and it worked.
Solution:
Select the key field in the adapter.
Right click
Select "Delete Key" (keeps the field, but removes the "key" icon)
That's it.