Rename column without breaking functions - PostgreSQL

Is there a way to rename a table column such that all references to that column in existing functions are automatically updated?
e.g. Doing this
ALTER TABLE public.person RENAME COLUMN name TO firstname;
would automatically change a reference like the following in any function:
return query
select * from person where name is null;

Since function bodies are stored as plain strings, there is no way for a column rename to automatically update the references inside them.
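What you can do is locate the functions that mention the old name and fix them by hand. A minimal sketch, assuming the functions live in the public schema and that searching the function source for the literal column name is precise enough for your codebase:

-- List functions whose source text mentions the old column name,
-- so they can be reviewed and edited manually after the rename.
select p.proname
from pg_proc p
join pg_namespace n on n.oid = p.pronamespace
where n.nspname = 'public'
  and p.prosrc ilike '%name%';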

Related

add a column to a table which just references an existing column

Is there a way to add a column alias to an existing table that just references another existing column in the table, such that reads and writes to the new column name go to the existing column? Sort of like how a view in Postgres can act as a read/write alias:
create view temp_order_contacts as (select * from order_emails)
This makes reads and writes to the order_emails table possible by referring to temp_order_contacts instead.
Is there something similar but for columns?
Assuming this is for backwards compatibility: you want to rename a column, but you also want existing queries to keep working.
You can rename the table and create a view with the original name.
-- Move the existing table out of the way.
alter table some_table rename to _some_table;
-- Create a view in its place.
create view some_table as (
select
*,
-- provide a column alias
some_column as some_other_column
from _some_table
);
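With the view in place, queries can use either name; for example (a small sketch, assuming the names above):

-- Both names resolve to the same underlying column in _some_table.
select some_column, some_other_column from some_table;
-- Simple single-table views are auto-updatable (PostgreSQL 9.3+), so writes
-- that target one of the names should also pass through; verify against your schema.
update some_table set some_other_column = 'new value';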

How to add a column to an existing table and update all the existing data in PostgreSQL?

I have an existing table. I want to add a new column and populate it for the whole table so that I do not have to refill the table.
The problem is that I have a route column in the format shown below. I want to add a new column route_name that drops everything from the second underscore '_' onward.
How do I do this by running a query?
route                        route_name (should look like)
dehradun_delhi_09:30_am      dehradun_delhi
katra_delhi_07:30_pm         katra_delhi
delhi_katra_08:00_pm         delhi_katra
bangalore_chennai_10:45_pm   bangalore_chennai
delhi_lucknow_09:00_pm       delhi_lucknow
chennai_bangalore_10:30_pm   chennai_bangalore
lucknow_varanasi_10:30_pm    lucknow_varanasi
varanasi_lucknow_09:30_pm    varanasi_lucknow
delhi_katra_08:00_pm         delhi_katra
katra_delhi_07:30_pm         katra_delhi
delhi_jalandhar_10:00_pm     delhi_jalandhar
jalandhar_delhi_11:00_am     jalandhar_delhi
delhi_amritsar_11:00_pm      delhi_amritsar
amritsar_delhi_11:00_pm      amritsar_delhi
What query should I run so that the existing data is backfilled and the new route_name column is populated in the existing table?
You need to do this in two steps.
First you add the column:
alter table route_table add column route_name text;
and then populate it:
update route_table set route_name = split_part(route, '_', 1) || '_' || split_part(route, '_', 2);
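To preview the derived values before running the UPDATE, a quick check like this can help (table and column names as in the question):

-- Preview the extraction on a handful of rows before updating.
select route,
       split_part(route, '_', 1) || '_' || split_part(route, '_', 2) as route_name
from route_table
limit 10;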

PostgreSQL - Dynamic addition of a large number of columns

Assume I have a table named tracker with columns (issue_id,ingest_date,verb,priority)
I would like to add 50 columns to this table.
Columns being (string_ch_01,string_ch_02,.....,string_ch_50) of datatype varchar.
Is there a better way to add the columns with a single statement rather than executing the following ALTER command 50 times?
ALTER TABLE tracker ADD COLUMN string_ch_01 varchar(1020);
Yes, a better way is to issue a single ALTER TABLE with all the columns at once:
ALTER TABLE tracker
ADD COLUMN string_ch_01 varchar(1020),
ADD COLUMN string_ch_02 varchar(1020),
...
ADD COLUMN string_ch_50 varchar(1020)
;
It's especially better when the new columns have non-null DEFAULT clauses, since (at least in PostgreSQL versions before 11) each separate ALTER TABLE would rewrite the entire table, as opposed to rewriting it only once when the additions are grouped in a single ALTER TABLE.
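If you'd rather not write out the 50 clauses by hand, the statement can also be generated dynamically. A sketch using an anonymous DO block, assuming the naming pattern string_ch_01 .. string_ch_50 from the question:

do $$
declare
    stmt text;
begin
    -- Build "alter table tracker add column string_ch_01 varchar(1020), ..." as one string.
    select 'alter table tracker '
           || string_agg(format('add column string_ch_%s varchar(1020)',
                                lpad(i::text, 2, '0')), ', ')
    into stmt
    from generate_series(1, 50) as i;
    execute stmt;
end
$$;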

Import column from file with additional fixed fields

Can I somehow import a column or columns from a file, where I specify one or more fields held fixed for all rows?
For example:
CREATE TABLE users(userid int PRIMARY KEY, fname text, lname text);
COPY users (userid,fname) from 'users.txt';
but where lname is assumed to be 'SMITH' for all the rows in users.txt?
My actual setting is more complex, where the field I want to supply for all rows is part of the PRIMARY KEY.
Possibly something of this nature:
COPY users (userid,fname,'smith' as lname) from 'users.txt';
Since I can't find a native solution to this in Cassandra, my solution was to perform a preparation step with Perl so the file contained all the relevant columns prior to calling COPY. This works fine, although I would prefer an answer that avoided this intermediate step.
e.g. adding a column with 'Smith' for every row to users.txt and calling:
COPY users (userid,fname,lname) from 'users.txt';
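For what it's worth, if the target were PostgreSQL rather than Cassandra, a temporary column default would avoid the preprocessing step, since columns omitted from the COPY column list are filled with their defaults. A sketch, assuming the users table from the question:

-- Columns not listed in COPY are filled with their defaults.
alter table users alter column lname set default 'SMITH';
copy users (userid, fname) from 'users.txt';
alter table users alter column lname drop default;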

How can I copy an IDENTITY field?

I’d like to update some parameters for a table, such as the dist key and sort key. In order to do so, I’ve renamed the old version of the table and recreated the table with the new parameters (these cannot be changed once a table has been created).
I need to preserve the id field from the old table, which is an IDENTITY field. If I try the following query however, I get an error:
insert into edw.my_table_new select * from edw.my_table_old;
ERROR: cannot set an identity column to a value [SQL State=0A000]
How can I keep the same id from the old table?
You can't INSERT data that sets the IDENTITY columns, but you can load the data from S3 with the COPY command.
First you will need to create a dump of the source table with UNLOAD.
Then use COPY with the EXPLICIT_IDS parameter, as described in Loading default column values:
If an IDENTITY column is included in the column list, the EXPLICIT_IDS option must also be specified in the COPY command, or the COPY command will fail. Similarly, if an IDENTITY column is omitted from the column list, and the EXPLICIT_IDS option is specified, the COPY operation will fail.
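A sketch of that route (Amazon Redshift); the S3 prefix and IAM role below are placeholders:

-- Dump the source table to S3, then load it back with the identity values preserved.
unload ('select * from edw.my_table_old')
to 's3://my-bucket/my_table_old_'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole';

copy edw.my_table_new
from 's3://my-bucket/my_table_old_'
iam_role 'arn:aws:iam::123456789012:role/MyRedshiftRole'
explicit_ids;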
You can explicitly specify the columns, and ignore the identity column:
insert into existing_table (col1, col2) select col1, col2 from another_table;
Use ALTER TABLE APPEND twice, the first time with IGNOREEXTRA and the second time with FILLTARGET.
If the target table contains columns that don't exist in the source table, include FILLTARGET. The command fills the extra columns in the source table with either the default column value or IDENTITY value, if one was defined, or NULL.
It moves the rows from one table to another extremely quickly; it took me 4 seconds for a 1 GB table on a dc1.large node.
Appends rows to a target table by moving data from an existing source table.
...
ALTER TABLE APPEND is usually much faster than a similar CREATE TABLE AS or INSERT INTO operation because data is moved, not duplicated.
Faster and simpler than UNLOAD + COPY with EXPLICIT_IDS.
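A minimal sketch of that route, assuming the two table names from the question and that any columns present only in the new table have defaults or IDENTITY definitions:

-- Moves the rows (data blocks) from the old table into the new one, so existing id values are carried over.
alter table edw.my_table_new append from edw.my_table_old filltarget;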