I have a table already in BQ that is populated with data. I want to rename the headings (update the schema) of the table. I'm using the command-line tool.
I'm presuming it's something along the lines of this:
bq update --schema:Col1:STRING,Col2:STRING....... data_set.Table_Name
But I'm getting
FATAL Flags parsing error: Unknown command line flag 'schema:Col1:STRING,Col2:STRING.....'
What am I missing?
As Mosha says, renaming columns is not supported via the API, but you could run a query that scans the whole table and overwrites it.
bq query --nouse_legacy_sql \
--destination_table p:d.table \
--replace \
'SELECT * EXCEPT(col1,col2), col1 AS newcol1, col2 AS newcol2 FROM `p.d.table`'
Warning: this overwrites the table. But that's what you wanted anyway.
BigQuery now supports renaming columns via a SQL query:
ALTER TABLE [IF EXISTS] table_name
RENAME COLUMN [IF EXISTS] column_to_column[, ...]
column_to_column :=
old_column_name TO new_column_name
https://cloud.google.com/bigquery/docs/reference/standard-sql/data-definition-language#alter_table_rename_column_statement
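For example, using the names from the question above (the dataset, table, and column names are placeholders to adapt):
ALTER TABLE data_set.Table_Name
RENAME COLUMN Col1 TO NewCol1;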
The correct syntax on the command line would be
bq update --schema col1:STRING,col2:STRING dataset.table
However, renaming fields is not a supported schema change - you will get an error message saying
Provided Schema does not match table
You can only add new fields or relax existing fields (i.e. from REQUIRED to NULLABLE).
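For example, appending a new NULLABLE column is an allowed schema change (the column names here are just placeholders):
bq update --schema col1:STRING,col2:STRING,col3:STRING dataset.table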
I have 2 schemas in the same database (PostgreSQL):
schema1
schema2
Each schema has a users table with a mail column.
How can I copy the content of the mail column in schema1.users to the mail column in schema2.users for all rows?
I tried:
update schema1.users
set mail=(select mail from schema2.users);
but it didn't work.
You can do an UPDATE joining the tables. Assuming both of your tables have matching IDs, it'll look like this:
UPDATE schema1.users a
SET mail=b.mail
FROM schema2.users b
WHERE a.id=b.id
What I'm doing is joining the tables and updating mail on schema1.users for every matching id.
EDIT: I just read that you actually wanted to update the mails in schema2.users. The query will be this one:
UPDATE schema2.users a
SET mail=b.mail
FROM schema1.users b
WHERE a.id=b.id
You can join the two tables. I joined on user, but I don't know what the table layout looks like.
update schema2.users
set mail=s1.mail
from schema1.users as s1
where users.user = s1.user
Not an elegant solution, but this is what I am doing from the command line in a script to copy a specific table from one schema to a specific table in a different schema:
set search_path to schema1;
\copy (select field1, nextfield, morefields from table_in_schema1) TO 'export.sql' delimiter ',' ;
set search_path to schema2;
\copy table_name_in_schema2(matchingfield1, field_that_matches_next, another_field_forPosition2) FROM 'export.sql' DELIMITER ',' ;
I have several tables that I need to export to a single table in the new schema, so I scripted this: a loop reads a file containing the table names to import into schema two and sends each one to the database using the command below, which loads around 30 tables into the table in the second schema.
psql -d alltg -c "\i bulk_import.sql"
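A minimal sketch of such a driver loop (the file name tables.txt and the inlined \copy commands are assumptions for illustration; the original script feeds the database via bulk_import.sql):
# Hypothetical loop: read one table name per line and run the same
# export/import pattern as above for each of them.
while read tbl; do
  psql -d alltg -c "\copy (select field1, nextfield, morefields from schema1.${tbl}) TO 'export.sql' delimiter ','"
  psql -d alltg -c "\copy schema2.table_name_in_schema2(matchingfield1, field_that_matches_next, another_field_forPosition2) FROM 'export.sql' DELIMITER ','"
done < tables.txt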
I am trying to rename a table in DB2 like so:
rename table schema1.mytable to schema2.mytable
but getting the following error message:
the name "mytable" has the wrong number of qualifiers.. SQLCODE=-108,SQLSTATE=42601
What is the problem here? I am using the exact syntax from the IBM publib documentation.
You cannot change the schema of a given object. You have to recreate it.
There are several ways to do that:
If you have only one table, you can export and import/load the table. If you use the IXF format, the DDL will be included in the generated file. If you use another format, the table has to be created first.
You can recreate the table by using:
Create table schema2.mytable like schema1.mytable
You can extract the DDL with the db2look tool
If you are changing the schema name for a schema given, you can use ADMIN_COPY_SCHEMA
These last two options only create the table structure, and you still need to import the data. After having created the table, you can insert the data in different ways:
Inserting directly
insert into schema2.mytable select * from schema1.mytable
Via load from cursor
Via a Load or import from file (The file exported in the previous step)
The problem is the foreign key relations, because they have to be recreated.
Finally, you can create an alias. It is easier, and you do not have to deal with relations.
You can easily rename a table with this statement:
RENAME TABLE SCHEMA.TABLENAME TO NEWTABLENAME;
You're not renaming the table in the provided example; you're trying to move it to a different schema, which is not the same thing. Look into the db2move tool for this.
If you want to rename a table in the same schema, you can do it like this:
RENAME TABLE schema.table_name TO "new_table_name";
Otherwise, you can use tools like DBeaver to rename or copy tables in a db2 db.
What if you leave it as is and create an alias with the new name and schema?
Renaming a table means renaming it within the same schema. To expose the table under a different schema, DB2 uses an alias instead (names taken from the question above):
db2 "create alias schema2.mytable for schema1.mytable"
I have a lot of records that originally came from MySQL. I massaged the data so it will insert successfully into PostgreSQL using ActiveRecord. I can do this easily with row-based insertion, i.e. one row at a time, but that is very slow. I want to do a bulk insert, but it fails if any of the rows contains invalid data. Is there any way I can achieve a bulk insert where only the invalid rows fail instead of the whole bulk?
COPY
When using SQL COPY for bulk insert (or its equivalent \copy in the psql client), failure is not an option. COPY cannot skip illegal lines. You have to match your input format to the table you import to.
If data itself (not decorators) is violating your table definition, there are ways to make this a lot more tolerant though. For instance: create a temporary staging table with all columns of type text. COPY to it, then fix offending rows with SQL commands before converting to the actual data type and inserting into the actual target table.
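A minimal sketch of that staging-table route (the table names, column names, and file path are made up for illustration):
-- Staging table with text-only columns, so COPY cannot fail on bad values.
CREATE TEMP TABLE staging (id text, val text);
COPY staging FROM '/path/to/data.csv' (FORMAT csv);
-- Repair offending rows while everything is still text.
UPDATE staging SET val = '0' WHERE val IS NULL OR val = '';
-- Cast the cleaned values and insert into the real target table.
INSERT INTO target (id, val)
SELECT id::int, val::numeric
FROM   staging;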
Consider this related answer:
How to bulk insert only new rows in PostgreSQL
Or this more advanced case:
"ERROR: extra data after last expected column" when using PostgreSQL COPY
If NULL values are offending, remove the NOT NULL constraint from your target table temporarily. Fix the rows after COPY, then reinstate the constraint. Or take the route with the staging table, if you cannot afford to soften your rules temporarily.
Sample code:
ALTER TABLE tbl ALTER COLUMN col DROP NOT NULL;
COPY ...
-- repair, like ..
-- UPDATE tbl SET col = 0 WHERE col IS NULL;
ALTER TABLE tbl ALTER COLUMN col SET NOT NULL;
Or you just fix the source file. COPY tells you the number of the offending line. Use an editor of your preference and fix it, then retry. I like to use vim for that.
INSERT
For an INSERT (as discussed in the comments), the check for NULL values is trivial:
To skip a row with a NULL value (tbl stands in for the unnamed target table):
INSERT INTO tbl (col1, ...
SELECT col1, ...
WHERE col1 IS NOT NULL
To insert something else instead of a NULL value (an empty string in my example):
INSERT INTO tbl (col1, ...
SELECT COALESCE(col1, ''), ...
A common workaround for this is to import the data into a TEMPORARY or UNLOGGED table with no constraints and, where data in the input is sufficiently bogus, text-typed columns.
You can then do INSERT INTO ... SELECT queries against the data to populate the real table with a big query that cleans up the data during import. You can use a lot of CASE statements for this. The idea is to transform the data in one pass.
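A minimal sketch of such a one-pass cleanup (staging, target, and the column semantics are hypothetical):
-- Transform and type-cast in a single INSERT ... SELECT.
INSERT INTO target (id, active)
SELECT id::int,
       CASE
         WHEN flag IN ('y', 'yes', '1') THEN true
         WHEN flag IN ('n', 'no', '0')  THEN false
         ELSE NULL  -- anything unrecognized becomes NULL
       END
FROM   staging;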
You might be able to do many of the fixes in Ruby as you read the data in, then push the data to PostgreSQL using COPY ... FROM STDIN. This is possible with Ruby's Pg gem; see e.g. https://bitbucket.org/ged/ruby-pg/src/tip/sample/copyfrom.rb .
For more complicated cases, look at ETL tools such as Pentaho Kettle or Talend Studio.
I'm wondering if I can use a trigger on a table to "ignore" columns that are in a COPY statement from STDIN but which are not in the target table. Sorry if the wording/syntax of the question is off, but here is an explanation of what I'm trying to say. I'm new to triggers, so any advice is helpful.
I'm using the PostGIS Shapefile importer to copy shapefiles to the spatial tables in my PostgreSQL database.
This creates a COPY statement which contains all the fields in the shapefile, something like:
COPY "public"."stations" ("column1","column2","column3","column4", geom) FROM stdin;
column1 and column2 are in the file but not in the target table, so the COPY fails.
Is there a way to create a trigger that would achieve the same result as:
COPY "public"."stations" ("column3","column4", geom) FROM stdin;
No, you cannot skip columns that are present in the input file. This will error out before triggers are even invoked. And you cannot use rules either. I quote the manual:
COPY FROM will invoke any triggers and check constraints on the destination table. However, it will not invoke rules.
You can either edit the file or use a temporary staging table:
COPY to a temporary table with matching columns.
Use INSERT to write the desired columns to the final target table(s) - or the whole range of SQL DDL commands for more sophisticated matters.
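A minimal sketch of that staging approach, reusing the column names from the question (the column types are guesses):
-- Stage every column that is present in the file.
CREATE TEMP TABLE stations_staging
  (column1 text, column2 text, column3 text, column4 text, geom geometry);
COPY stations_staging ("column1","column2","column3","column4", geom) FROM stdin;
-- Keep only the columns that exist in the real table.
INSERT INTO "public"."stations" ("column3","column4", geom)
SELECT "column3","column4", geom
FROM   stations_staging;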
I have an issue with psql. I am trying to select the records from a table, but psql acts like the table doesn't exist. I tried finding it and found that it resides in the 'public' schema. I have tried selecting from this table like so:
highways=# SELECT * FROM public.CLUSTER_128000M;
This does not work, stating the following:
ERROR: relation 'public.CLUSTER_128000M' does not exist
I know that it definitely exists and that it is definitely in the 'public' schema, so how can I perform a SELECT statement on it?
Edit:
This was caused by using FME to create my tables. As a result, FME put quotation marks around the table names, making them case-sensitive. To reverse this, see below.
This issue was caused by the third-party software FME putting quotes around the names of the tables at creation time. The solution to make the tables usable again was to use the following command:
ALTER TABLE "SOME_NAME" RENAME TO some_name