Redshift - truncating string when inserting to target

I have two tables. Table A is an operational store and table B is the destination table.
Table A DDL:
Column A Varchar(1000)
Table B DDL:
Column B Varchar(250)
So I'm trying to insert a truncated column A like so:
Insert into table_b (select left(table_a.column_a, 249))
but it gives the error
"error: Value too long for character type"
I have also tried substring to truncate the text, but to no avail. Please note that there is also Arabic text in column A, but it hasn't been an issue in table A.
Any help / suggestions would be much appreciated!

To get around the issue of multi-byte characters, you can cast your field to the desired VarChar size using ::VarChar([some length]). For your example, you would do:
Insert into table_b (select table_a.column_a::VarChar(250))
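A minimal sketch of why the cast works where LEFT() fails, using the hypothetical names table_a/table_b from above: LEFT() counts characters while VarChar lengths are measured in bytes, and an explicit cast truncates the value to fit the target.

-- assumes table_a(column_a varchar(1000)) and table_b(column_b varchar(250))
insert into table_b (column_b)
select column_a::varchar(250)  -- truncated to at most 250 bytes, no error
from table_a;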

The problem is that each Arabic character takes more than one byte, because Redshift is a Unicode database: the VarChar length is defined in bytes, not characters. So to be on the safe side you can divide the character count by 4 (the maximum size of a UTF-8 character).
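A sketch of the divide-by-4 approach with the same hypothetical names: 250 bytes divided by 4 bytes per character gives 62 characters, which always fits regardless of script.

insert into table_b (column_b)
select left(column_a, 62)  -- 62 characters is at most 248 bytes in the worst case
from table_a;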

Related

Alter Column TYPE USING - cannot change from varchar to numeric

I would be eternally grateful if somebody could help me a bit. I am totally new to PostgreSQL 10.
I had a large file (millions of lines, 73 columns) that I could not import, so I set all the columns to varchar. Now I need to manipulate the data but I cannot change the datatype; I have tried for hours. The column contains a few values with 1 or 2 decimals. This is what I am doing:
ALTER TABLE table1
ALTER COLUMN facevalue TYPE numeric USING (facevalue::numeric);
this is the error I get
ERROR: invalid input syntax for type numeric: " "
SQL state: 22P02
Thank you for your time and consideration
You apparently have empty strings or whitespace-only values. You need to convert them to NULL first:
ALTER TABLE table1
ALTER COLUMN facevalue TYPE numeric USING (nullif(trim(facevalue),'')::numeric);
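A minimal reproduction of the fix, assuming a throwaway table t1: the whitespace-only and empty values become NULL instead of failing the cast.

create temp table t1 (facevalue varchar);
insert into t1 values ('12.5'), (' '), ('');
alter table t1
alter column facevalue type numeric using (nullif(trim(facevalue), '')::numeric);
select facevalue from t1;  -- 12.5, NULL, NULL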

How to convert a currency column of a table into a numeric column in postgresql

We need to make our schema support multiple currencies, so using a currency field is not an option. I am trying to convert the currency column into numeric(12,2). I tried the following approaches:
ALTER TABLE lead ALTER COLUMN deal_size TYPE NUMERIC(12, 2);
ALTER TABLE lead ALTER COLUMN deal_size TYPE NUMERIC(12, 2) using deal_size::money::numeric(12,2);
each time I get the following error:
ERROR: numeric field overflow
DETAIL: A field with precision 12, scale 2 must round to an absolute value less than 10^10.
I verified that none of the values in this column is more than $1,000,000.
I tested the following in my PostgreSQL and it works perfectly well. What version of PostgreSQL are you running?
create temp table lead (id serial not null primary key, deal_size money);
insert into lead (deal_size) select (random()*100000000)::numeric(14,4) from generate_series(1,10000) a;
ALTER TABLE lead ALTER COLUMN deal_size TYPE NUMERIC(12, 2);
You don't have values greater than a million. Have you tested for large negative values?
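A hedged query to locate any offending rows, assuming the table has an id column as in the test above (10^10 is the absolute-value limit for numeric(12,2)):

select id, deal_size
from lead
where abs(deal_size::numeric) >= 10000000000;  -- rows that cannot fit numeric(12,2)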

Attempts to alter a Postgresql column type from varchar to bytea hangs indefinitely

I've got a table with 4 rows in it in a non-production database used for development. There are 2 varchar columns that I want to convert to bytea. I don't care about the contents so I could of course drop the columns and then add them back, but I became confused when I tried to just change the type:
alter table whatever
alter column col_1 set data type bytea using null,
alter column col_2 set data type bytea using null;
When I try that, the psql client just hangs. By that I mean that it just sits there giving no feedback until I eventually hit ^C and it aborts. I've tried that with a little test table and it works fine, but for some reason it doesn't work on the real table (which, really, is also just a "little test table").
The using clause doesn't seem to make a difference one way or the other; I can leave it out or give other values, and the command does the same thing.
I don't get an error, I just don't get anything. Is that what I should expect?
I'm on 9.1 on Ubuntu 14.10, if it matters.
"I don't care about the contents"
In that case, this works (note that varchar cannot be cast directly to bytea, so convert_to() is used to encode the existing text as UTF-8 bytes):
ALTER TABLE tablename
ALTER COLUMN colname TYPE bytea USING convert_to(colname, 'UTF8');
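Since the contents don't matter here, discarding them outright also works, which is essentially what the original attempt did; the hang was caused by a lock (see the next answer), not by the USING clause. Assuming the columns are nullable:

ALTER TABLE tablename
ALTER COLUMN colname TYPE bytea USING NULL;  -- every row's value becomes NULL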
Simple:
Get the active locks from pg_locks:
select t.relname, l.locktype, page, virtualtransaction, pid, mode, granted
from pg_locks l, pg_stat_all_tables t
where l.relation = t.relid
order by relation asc;
Copy the pid (e.g. 14210) from the result above and substitute it into the command below:
SELECT pg_terminate_backend(14210);
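Before terminating anything, it may help to check what the blocking session is actually running; a hedged look via pg_stat_activity (the column names changed in 9.2, and the question is on 9.1):

-- PostgreSQL 9.2 and later
select pid, state, query from pg_stat_activity where pid = 14210;
-- PostgreSQL 9.1 and earlier
select procpid, current_query from pg_stat_activity where procpid = 14210;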

Firebird 1.5 : Duplicate columns in a table

I noticed that Firebird creates duplicate columns for a single table, so incorrect indices are used in queries, which makes them slow. Please see the example below.
I have 2 tables with the same columns and indices, but when checking the table structure, one table shows duplicate columns:
Table A:
  Name VARCHAR(30)
  Age INTEGER
  BIRTH_DATE TIMESTAMP
  Indices: Name, Birth_date (asc), Birth_date (desc)
Table B:
  Name VARCHAR(30)
  Age INTEGER
  BIRTH_DATE TIMESTAMP
  Name VARCHAR(30)
  Age INTEGER
  BIRTH_DATE TIMESTAMP
  Indices: Name, Birth_date (asc), Birth_date (desc)
When joining these tables with table C and ordering by Birth_date, table A uses the Birth_date index for the ordering, but table B does not.
Please help! What is the cause of this? Thank you.
I just had a problem where a duplicate column had been allowed to be created. This query
SELECT a.RDB$FIELD_NAME
FROM RDB$RELATION_FIELDS a
WHERE a.RDB$FIELD_NAME like '%COLUMN_NAME%'
was showing two COLUMN_NAME lines. By copy-pasting the fields elsewhere, it became apparent that one column name had trailing whitespace, while the other had a carriage return + line feed (CRLF) followed by trailing whitespace.
The FlameRobin wizard was used to create the column. My take on it is that a copy-paste was used and a CRLF was inserted. Excel and other software can do that to you. FlameRobin, FlameRobin's driver, and Firebird should each guard against that, though.
We dropped the offending column by crafting some DDL which had the offending CRLF in the column name.
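A sketch of what that DDL could look like, with hypothetical names; the quoted (dialect 3) identifier must reproduce the broken name byte for byte, including the literal CRLF and trailing whitespace:

ALTER TABLE TABLE_B DROP "BIRTH_DATE
 ";  -- the line break inside the quotes is the offending CRLF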

text[] in postgresql?

I saw a field text[] (text array) in Postgresql.
As far as I understood, it can store multiple text values in a single column.
I tried to read more about it in the manual: http://www.postgresql.org/docs/current/static/datatype-character.html but unfortunately there was not much there about the text[] column type.
So can anyone help me understand:
How do I add a new value to a text[] column?
What will the result set be when we query to retrieve the values of a text[] column?
EDIT
I have a table containing 2 columns, group_name and members. Each time a new person joins the group, the new person's id should be inserted into the members column for that group_name. This is my requirement. A group can contain any number of members.
EDIT 2
Pablo is asking me to use two tables instead. May I know how this could be solved by using two different tables? Right now I am using a comma (,) to store multiple values. Is this method wrong?
Assuming you have this table:
create table foo (a text[]);
To insert new values just do:
insert into foo values (ARRAY['a', 'b']);
Every time you do a select a from foo you will have a column of type array:
db1=> select a from foo;
a
-------
{a,b}
(1 row)
If you want a specific element from the array, you need to use subscripts (arrays in PostgreSQL are 1-based):
db=> select a[1] from foo;
a
---
a
(1 row)
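To add a new value to an existing row's array (the "new member joins the group" case from the edit), you can use array_append or the || operator; a short sketch reusing the foo table above:

update foo set a = array_append(a, 'c');  -- {a,b} becomes {a,b,c}
-- equivalent: update foo set a = a || 'c';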
Be careful when choosing an array datatype for your PostgreSQL tables. Make sure you don't need a child table instead.
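To answer the second edit: the two-table (normalized) alternative stores one row per member instead of a comma-separated string or an array. A hedged sketch with hypothetical names, where a person joining a group is a single insert:

create table groups (
  group_name text primary key
);
create table group_members (
  group_name text references groups (group_name),
  member_id integer,
  primary key (group_name, member_id)
);
insert into groups values ('admins');
-- a new person joining a group is a single insert:
insert into group_members values ('admins', 42);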