I am trying to change the length of a column from character varying(40) to character varying(100).
Following the method described in this question: Increasing the size of character varying type in postgres without data loss
ALTER TABLE info_table ALTER COLUMN docs TYPE character varying(100);
I tried this command, but it returns a syntax error:
ERROR: syntax error at or near "TYPE" at character 52
Is any change needed in this command? I am using PostgreSQL version 7.4.30 (an upgrade to 9.2 is in progress :) ).
I tried the same command in a test db that has already been upgraded to version 9.2, and it works fine there.
Changing a column's type on the fly was not possible in the ancient version 7.4. Check the old manual. You had to add another column, update it with the (possibly transformed) values, then drop the old one and rename the new one - preferably in a single transaction, and with side effects on views or other depending objects ...
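A rough sketch of that dance, reusing the table and column names from the question (column order changes, and anything that depends on the docs column still has to be dealt with separately):

BEGIN;
-- add the wider replacement column
ALTER TABLE info_table ADD COLUMN docs_new character varying(100);
-- copy (and, if needed, transform) the existing values
UPDATE info_table SET docs_new = docs;
-- swap the old column for the new one
ALTER TABLE info_table DROP COLUMN docs;
ALTER TABLE info_table RENAME COLUMN docs_new TO docs;
COMMIT;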
To avoid this kind of problem altogether, I suggest using plain text or varchar (without a length modifier) for character data. Details in this related question.
Remove the word TYPE; that syntax wasn't recognized 10 years ago, but you should be fine without it.
After updating to PostgreSQL v15, I realised that even though I have a column that accepts the UUID data type, it throws an error like this whenever I try to insert UUID data into the table:
Script:
INSERT INTO public.testing(uuid, rating) VALUES (${uuid}, ${rating})
Error:
error running query error: trailing junk after numeric literal at or near "45c"
Postgresql 15 release note:
Prevent numeric literals from having non-numeric trailing characters (Peter Eisentraut)
Is there any solution for this issue? Or is there an alternative data type that allows storing UUIDs in my table?
It seems that you forgot the single quotes around the UUID, so that the PostgreSQL parser took the value for a subtraction and complained that there were letters mixed in with the digits. This may throw a different error on older PostgreSQL versions, but it won't do the right thing either.
Be careful about SQL injection when you quote the values.
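A minimal sketch of the quoted form, with a made-up UUID value standing in for the ${uuid} placeholder (binding the values as query parameters instead of interpolating them into the string is the safer route):

INSERT INTO public.testing (uuid, rating)
VALUES ('3f7c9a1e-0b45-4c7d-9a12-5e8b2d6f45c0', 5);
-- the single quotes make the value a uuid literal instead of an arithmetic expression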
We had a table with a Clob (length 1048576) column that stored search text to help with searching. When I transferred it from DB2 to Postgres in our migration, I found that it wasn't working as well. So I was going to try text or varchar, but I found that long text entries took much longer to be added to the table, to the point that my local WildFly window would time out when trying to run.
What is the equivalent data type that accepts text that I should be using in Postgres to replace a Clob of length 1048576 in DB2? It might be that I was using the right data types but didn't have the right corresponding size.
Use text. That is the only reasonable data type for long character strings.
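A minimal sketch, with made-up table and column names; text in PostgreSQL takes no length modifier, so there is nothing corresponding to the 1048576 limit to specify:

-- hypothetical table: search_text replaces the DB2 Clob(1048576) column
CREATE TABLE search_index (
    id          bigint PRIMARY KEY,
    search_text text            -- variable-length character data, effectively unlimited
);

-- or convert an existing varchar column in place
ALTER TABLE search_index ALTER COLUMN search_text TYPE text;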
I'm programmatically adding data to a PostgreSQL table using Python and psycopg - this is working fine.
Occasionally though, a text value is too long for the containing column, so I get the message:
ERROR: value too long for type character varying(1000)
where the number is the width of the offending column.
Is there a way to determine which column has caused the error? (Aside from comparing each column's length to see whether it is 1000)
Many thanks to @Tometzky, whose comment pointed me in the right direction.
Rather than trying to determine which column caused the problem after the fact, I modified my Python script to ensure that the value was truncated before inserting into the database.
access the table's schema using select column_name, data_type, character_maximum_length from information_schema.columns where table_name='test'
when building the INSERT statement, use the schema definition to identify character fields and truncate them if necessary (a sketch of both steps follows)
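A sketch of the idea in plain SQL, assuming a hypothetical table test with a varchar(5) column t; in the actual Python script the truncation happens before the value is bound into the INSERT:

-- step 1: read the declared maximum length of each character column
SELECT column_name, data_type, character_maximum_length
FROM information_schema.columns
WHERE table_name = 'test';

-- step 2: truncate the value to that length on the way in
INSERT INTO test (t) VALUES (left('123456', 5));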
I don't think there's an easy way.
I tried to set VERBOSITY in psql, as I assumed this would help, but unfortunately not (on 9.4):
psql
\set VERBOSITY verbose
dbname=> create temporary table test (t varchar(5));
CREATE TABLE
dbname=> insert into test values ('123456');
ERROR: 22001: value too long for type character varying(5)
LOCATION: varchar, varchar.c:623
This might be something that warrants discussion on the mailing list, as you are not the only one with this problem.
In SQL Server 2005, I built a trigger that contains a SQL statement that unpivots some data. It is somewhat similar to the following simple example: http://sqlfiddle.com/#!3/cdc1b/1/0. Let's say that the table the trigger is built on is "table1" and it's set to run after updates.
Within SSMS, whenever I update "table1" everything works fine. Unfortunately, whenever I update "table1" in a proprietary application (which I don't have the source code to), it fails with the message "The type of column conflicts with the type of other columns specified in the UNPIVOT list".
After doing a bit of searching I added COLLATE DATABASE_DEFAULT to my casts in the view, without any luck. It was a bit of a long shot because the collations all matched whenever I queried INFORMATION_SCHEMA.COLUMNS.
I then changed the casts from VARCHAR to CHAR and it worked without issue. For obvious reasons, I'd rather use VARCHAR. What is different between an SSMS connection and the application's connection? I assume the application isn't using a connection property that SSMS uses.
PS: The database is a bit funky because it does not use NULLs and uses CHAR instead of VARCHAR.
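For reference, the kind of cast described above looks roughly like this (the column names are made up; the real schema is in the sqlfiddle link), with every column in the UNPIVOT list cast to the same type, length and collation:

SELECT id, field, value
FROM (
    SELECT id,
           CAST(col_a AS VARCHAR(50)) COLLATE DATABASE_DEFAULT AS col_a,
           CAST(col_b AS VARCHAR(50)) COLLATE DATABASE_DEFAULT AS col_b
    FROM table1
) AS src
UNPIVOT (value FOR field IN (col_a, col_b)) AS u;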
We have a Companies table in our database with a "Name" varchar column and its size is currently 30 characters. Our seemingly simple task of changing its size to 50 characters has turned into a bit of an issue. When trying to change it and save the changes through Sybase Central we get the following error:
[Sybase][ODBC Driver][SQL Anywhere]Illegal column definition: Name
SQLCODE: -1046
SQLSTATE: 42000
SQL Statement: ALTER TABLE "DBA"."Companies" ALTER "Name" VARCHAR(50)
We've tried various escaping characters around the column name, thinking it might have something to do with the word "Name" being treated differently internally by Sybase. We have no indexes or constraints on this column and have removed the single trigger that did exist, just trying to isolate any potential factors. Further perplexing us, we have a Companies_a table that keeps track of all changes to the Companies table and has nearly the exact same schema, including the Name column. We are able to change that column's size without issue, which seems to indicate it's not necessarily an issue with the word "Name". I've gone through all the tabs in Sybase Central for this table, and don't see anything special/different about this table or this column.
Googling this issue is difficult as the word "Name" is extremely common. We have workarounds we can do (i.e. creating a temp column, copying the data, dropping & recreating the column, copying back) but if possible I'd like to understand exactly what's happening here and why.
I believe our Companies table and this column were likely created in a previous version of ASA. The inline_max column in sys.systabcol was set to null and we could not change it.
Running the following statement set the defaults:
ALTER TABLE "DBA"."Companies" ALTER "Name" inline use default prefix use default;
I was then able to run this statement without error:
ALTER TABLE "DBA"."Companies" ALTER "Name" VARCHAR(50);
Here's where I found the general concept for the fix:
Sybase SqlAnywhere forum thread
ALTER TABLE [tableName]
ALTER [ColumnName] varchar(4000) NULL