Using DBeaver, I am trying to add a new column and populate it from an existing column. However, after the update to the new column, other column values are changed (to NULL). I am unsure what the issue is.
Before the update:
My statements:
ALTER TABLE project1.nhd
ADD "Address" varchar;
UPDATE project1.nhd
SET "Address" = SPLIT_PART("PropertyAddress", ',', 1);
After the update:
I thought the issue was the method of data transfer, so I tried importing the CSV as a database in DBeaver and then exporting it to the table, and also using COPY to import the data into the table. However, the result was the same.
I'm using CRecordset to query one table, but I use a second table to filter the data. If in my GetDefaultSQL override I return a table list with more than one table, I get this ERROR: column reference "ctid" is ambiguous. I know what a "ctid" column is, but I don't use it in my code; it's inserted into the original SQL statement by the ODBC driver. How do I fix this? How do I tell the ODBC driver not to insert the "ctid" column?
I tried calling CRecordset::Open with the readOnly parameter, as I assume that ODBC needs ctid to update rows, and I don't need to update them. But the error remains.
I also tried adding a primary key to the second table, which was missing one, thinking that if a table has a primary key then ODBC can use that instead of "ctid", but again no luck. That makes sense though, because I don't fetch any column of that second table; it is used just for filtering.
If I make a DB view to work around the issue, I get ERROR: column "ctid" does not exist.
You have to call CRecordset::Open with two parameters changed:
m_pSet->Open(CRecordset::snapshot, NULL, CRecordset::readOnly);
Then you can fetch both the joined tables and the view without errors. No "ctid" then.
I have an existing table to which I want to add a new column and populate it for all existing rows, so that I do not have to reload the table.
The problem is that my route column is in the format shown below. I want to add a new column route_name that excludes everything after the second underscore '_'.
How do I do this by running a query?
route    route_name (desired result)
dehradun_delhi_09:30_am dehradun_delhi
katra_delhi_07:30_pm katra_delhi
delhi_katra_08:00_pm delhi_katra
bangalore_chennai_10:45_pm bangalore_chennai
delhi_lucknow_09:00_pm delhi_lucknow
chennai_bangalore_10:30_pm chennai_bangalore
lucknow_varanasi_10:30_pm lucknow_varanasi
varanasi_lucknow_09:30_pm varanasi_lucknow
delhi_katra_08:00_pm delhi_katra
katra_delhi_07:30_pm katra_delhi
delhi_jalandhar_10:00_pm delhi_jalandhar
jalandhar_delhi_11:00_am jalandhar_delhi
delhi_amritsar_11:00_pm delhi_amritsar
amritsar_delhi_11:00_pm amritsar_delhi
Please tell me what query I should run so that the existing data is backfilled and a new column called route_name is populated in the existing table.
You need to do this in two steps.
First you add the column:
alter table route_table add column route_name text;
and then populate it:
update route_table set route_name = split_part(route, '_', 1) || '_' || split_part(route, '_', 2);
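If you prefer a single expression that keeps everything before the second underscore no matter how many parts follow, a regexp-based alternative (using the same table and column names as above) could look like this:
-- keep only the text before the second underscore
update route_table set route_name = regexp_replace(route, '^([^_]+_[^_]+).*$', '\1');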
I am using PostgreSQL 9.6. I have created a table with a CREATE TABLE query.
But when I checked the left panel of pgAdmin, under the table I found six more columns named tableoid, cmax, xmax, cmin, xmin and ctid.
When I searched about this, I found that these are system (OID) columns and do not affect the data in the other columns.
I have to import data into this table. After selecting the table, the right-click menu gives me an Import/Export option, and from there I am importing a .csv file.
But when I try to import the data into the table, I get an error like:
ERROR: column "tableoid" of relation "account" does not exist
Please suggest how to eliminate these system columns from the table.
Your CSV must contain a column named "tableoid" that is not an actual column of the table.
In this case, a table matching the import file must be created first. If there is no matching table beforehand, it won't work. This may help:
http://www.postgresqltutorial.com/import-csv-file-into-posgresql-table/
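Alternatively, you can import from psql and name the real columns explicitly, so the system columns never enter the picture. A minimal sketch, assuming the file has a header row and the table's actual columns are id, name and balance (placeholders, not your real columns):
-- psql meta-command; adjust the column list and file name to your table
\copy account (id, name, balance) from 'account.csv' with (format csv, header)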
I have a lot of records that originally come from MySQL. I massaged the data so it can be inserted into PostgreSQL using ActiveRecord. I can easily do this with row-by-row insertions, i.e. one row at a time, but that is very slow. I want to do a bulk insert, but it fails if any of the rows contains invalid data. Is there any way I can achieve a bulk insert where only the invalid rows fail instead of the whole batch?
COPY
When using SQL COPY for bulk insert (or its equivalent \copy in the psql client), failure is not an option. COPY cannot skip illegal lines. You have to match your input format to the table you import to.
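For illustration, a plain COPY where the file layout matches the target table could look like this (table, columns, path and options are all placeholders):
-- the listed columns must match the file layout exactly
COPY tbl (col1, col2) FROM '/path/to/data.csv' WITH (FORMAT csv, HEADER, NULL '');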
If the data itself (not the decorators) violates your table definition, there are ways to make this a lot more tolerant, though. For instance: create a temporary staging table with all columns of type text. COPY to it, then fix offending rows with SQL commands before converting to the actual data types and inserting into the actual target table.
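A minimal sketch of that staging-table route (all names, the file path and the final casts are placeholders):
-- staging table: every column as plain text, no constraints
CREATE TEMP TABLE staging (col1 text, col2 text);

-- load the raw file; nothing is validated beyond the CSV format itself
COPY staging FROM '/path/to/data.csv' WITH (FORMAT csv, HEADER);

-- repair offending rows, e.g. replace empty strings before casting
UPDATE staging SET col2 = '0' WHERE col2 = '';

-- cast and move the cleaned rows into the real table
INSERT INTO tbl (col1, col2)
SELECT col1, col2::int
FROM   staging;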
Consider this related answer:
How to bulk insert only new rows in PostreSQL
Or this more advanced case:
"ERROR: extra data after last expected column" when using PostgreSQL COPY
If NULL values are offending, remove the NOT NULL constraint from your target table temporarily. Fix the rows after COPY, then reinstate the constraint. Or take the route with the staging table, if you cannot afford to soften your rules temporarily.
Sample code:
ALTER TABLE tbl ALTER COLUMN col DROP NOT NULL;
COPY ...
-- repair, like ..
-- UPDATE tbl SET col = 0 WHERE col IS NULL;
ALTER TABLE tbl ALTER COLUMN col SET NOT NULL;
Or you just fix the source table. COPY tells you the number of the offending line. Use an editor of your preference and fix it, then retry. I like to use vim for that.
INSERT
For an INSERT (as mentioned in the comments), the check for NULL values is trivial:
To skip a row with a NULL value:
INSERT INTO tbl (col1, ...
SELECT col1, ...
WHERE col1 IS NOT NULL
To insert something else instead of a NULL value (an empty string in my example):
INSERT INTO tbl (col1, ...
SELECT COALESCE(col1, ''), ...
A common work-around for this is to import the data into a TEMPORARY or UNLOGGED table with no constraints and, where the input data is sufficiently bogus, text-typed columns.
You can then run INSERT INTO ... SELECT queries against the data to populate the real table with a big query that cleans up the data during import. You can use a lot of CASE expressions for this. The idea is to transform the data in one pass.
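A rough sketch of such a cleanup query (loading_table, real_table and all column names are made-up names for illustration):
INSERT INTO real_table (id, amount, status)
SELECT id::int,
       CASE WHEN amount ~ '^[0-9]+(\.[0-9]+)?$' THEN amount::numeric ELSE NULL END,
       CASE status WHEN 'Y' THEN 'active' WHEN 'N' THEN 'inactive' ELSE 'unknown' END
FROM   loading_table;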
You might be able to do many of the fixes in Ruby as you read the data in, then push the data to PostgreSQL using COPY ... FROM STDIN. This is possible with Ruby's Pg gem, see e.g. https://bitbucket.org/ged/ruby-pg/src/tip/sample/copyfrom.rb .
For more complicated cases, look at Pentaho Kettle or Talend Studio ETL tools.
I wanted to prepare a load utility to load data into a DB2 table. The table has columns that have the GENERATED ALWAYS attribute set.
Because of that, I am not able to load the data that was previously unloaded from the table.
Is it possible to use IMPORT for tables having columns with GENERATED ALWAYS set?
Steps I did:
1. db2 "export to tbl.txt of del modified by coldel| select * from <schema.table> where col=value"
2. db2 "delete from <schema.table> where col=value"
3. db2 "import from tbl.txt of del modified by coldel| allow write access warningcount1 insert into <schema.table>"
The columns with GENERATED ALWAYS have new values after the import. Is it possible to use IMPORT so that the GENERATED ALWAYS columns keep the old values?
Appreciate the assistance.
Thanks,
Mathew Liju
What you are asking is not possible. With IMPORT you can't override columns that have GENERATED ALWAYS. As @Peter Miehle suggests, you could alter the table to specify that the column is GENERATED BY DEFAULT, but this may break other applications.
Your question's title implies that you don't want to use the LOAD utility (but you don't mention anything about it in the actual question). However, LOAD is the only way to write data into the table and maintain the values for the generated column as they exist in the file:
db2 "load from tbl.txt of del modified by generatedoverride insert into schema.table"
If you do this, be aware that:
DB2 does not check whether there are conflicts with existing rows in the table. You would need to define a unique index on the column(s) in question to catch this; DB2 would then delete the rows that you just loaded during the DELETE phase of the load.
If your generated column(s) are using IDENTITY, make sure that you alter the column to ensure that future generated values do not conflict with the rows that you just inserted into the table.
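For example, if the generated identity column is named id and the highest value you just loaded was 1000 (both are assumptions for illustration), the restart could look like this:
db2 "alter table <schema.table> alter column id restart with 1001"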
Maybe you can drop the "generation" from the column and add it back after importing, with the appropriate values.
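A sketch of that route for an identity column (the column name id is an assumption; the import command is the one from the question):
1. db2 "alter table <schema.table> alter column id set generated by default"
2. db2 "import from tbl.txt of del modified by coldel| allow write access warningcount1 insert into <schema.table>"
3. db2 "alter table <schema.table> alter column id set generated always"
Afterwards you may still need to restart the identity value so that future generated values do not clash with the imported ones.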
@Ian Bjorhovde has given you the options.
IMPORT actually does INSERTs in the background, i.e. it first prepares an INSERT statement with parameter markers and then uses the values in the input file for those markers.
In your SQL snapshot you will see the INSERT statement that is used.
Anything that is not possible in an INSERT statement isn't possible with IMPORT (more or less).
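For illustration only, the statement IMPORT prepares is roughly of this shape (the column names here are placeholders, not from your table):
INSERT INTO <schema.table> (col1, col2) VALUES (?, ?)
Since a plain INSERT like this cannot supply a value for a GENERATED ALWAYS column, IMPORT cannot either.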